00:00:00.000 Started by upstream project "autotest-per-patch" build number 132406
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.088 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.089 The recommended git tool is: git
00:00:00.089 using credential 00000000-0000-0000-0000-000000000002
00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.140 Fetching changes from the remote Git repository
00:00:00.142 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.206 Using shallow fetch with depth 1
00:00:00.206 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.206 > git --version # timeout=10
00:00:00.247 > git --version # 'git version 2.39.2'
00:00:00.247 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.274 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.274 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.666 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.679 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.694 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.694 > git config core.sparsecheckout # timeout=10
00:00:06.707 > git read-tree -mu HEAD # timeout=10
00:00:06.726 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.748 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.748 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.874 [Pipeline] Start of Pipeline
00:00:06.885 [Pipeline] library
00:00:06.886 Loading library shm_lib@master
00:00:06.887 Library shm_lib@master is cached. Copying from home.
00:00:06.899 [Pipeline] node
00:00:06.911 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest
00:00:06.912 [Pipeline] {
00:00:06.920 [Pipeline] catchError
00:00:06.922 [Pipeline] {
00:00:06.930 [Pipeline] wrap
00:00:06.937 [Pipeline] {
00:00:06.944 [Pipeline] stage
00:00:06.946 [Pipeline] { (Prologue)
00:00:06.964 [Pipeline] echo
00:00:06.965 Node: VM-host-WFP1
00:00:06.971 [Pipeline] cleanWs
00:00:06.979 [WS-CLEANUP] Deleting project workspace...
00:00:06.979 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.987 [WS-CLEANUP] done
00:00:07.181 [Pipeline] setCustomBuildProperty
00:00:07.262 [Pipeline] httpRequest
00:00:07.803 [Pipeline] echo
00:00:07.804 Sorcerer 10.211.164.20 is alive
00:00:07.814 [Pipeline] retry
00:00:07.816 [Pipeline] {
00:00:07.828 [Pipeline] httpRequest
00:00:07.832 HttpMethod: GET
00:00:07.833 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.834 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.850 Response Code: HTTP/1.1 200 OK
00:00:07.850 Success: Status code 200 is in the accepted range: 200,404
00:00:07.851 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.646 [Pipeline] }
00:00:12.663 [Pipeline] // retry
00:00:12.672 [Pipeline] sh
00:00:12.952 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.966 [Pipeline] httpRequest
00:00:13.715 [Pipeline] echo
00:00:13.717 Sorcerer 10.211.164.20 is alive
00:00:13.726 [Pipeline] retry
00:00:13.728 [Pipeline] {
00:00:13.743 [Pipeline] httpRequest
00:00:13.747 HttpMethod: GET
00:00:13.748 URL: http://10.211.164.20/packages/spdk_c1691a126f147c795009e27ad9d4a3eb66baa13c.tar.gz
00:00:13.748 Sending request to url: http://10.211.164.20/packages/spdk_c1691a126f147c795009e27ad9d4a3eb66baa13c.tar.gz
00:00:13.771 Response Code: HTTP/1.1 200 OK
00:00:13.771 Success: Status code 200 is in the accepted range: 200,404
00:00:13.772 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c1691a126f147c795009e27ad9d4a3eb66baa13c.tar.gz
00:00:56.529 [Pipeline] }
00:00:56.546 [Pipeline] // retry
00:00:56.553 [Pipeline] sh
00:00:56.835 + tar --no-same-owner -xf spdk_c1691a126f147c795009e27ad9d4a3eb66baa13c.tar.gz
00:00:59.370 [Pipeline] sh
00:00:59.646 + git -C spdk log --oneline -n5
00:00:59.646 c1691a126 bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext()
00:00:59.646 5c8d99223 bdev: Factor out checking bounce buffer necessity into helper function
00:00:59.646 d58114851 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io
00:00:59.646 32c3f377c bdev: Use data_block_size for upper layer buffer if hide_metadata is true
00:00:59.646 d3dfde872 bdev: Add APIs get metadata config via desc depending on hide_metadata option
00:00:59.664 [Pipeline] writeFile
00:00:59.679 [Pipeline] sh
00:00:59.962 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:59.974 [Pipeline] sh
00:01:00.256 + cat autorun-spdk.conf
00:01:00.257 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:00.257 SPDK_TEST_NVME=1
00:01:00.257 SPDK_TEST_FTL=1
00:01:00.257 SPDK_TEST_ISAL=1
00:01:00.257 SPDK_RUN_ASAN=1
00:01:00.257 SPDK_RUN_UBSAN=1
00:01:00.257 SPDK_TEST_XNVME=1
00:01:00.257 SPDK_TEST_NVME_FDP=1
00:01:00.257 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:00.265 RUN_NIGHTLY=0
00:01:00.267 [Pipeline] }
00:01:00.280 [Pipeline] // stage
00:01:00.295 [Pipeline] stage
00:01:00.297 [Pipeline] { (Run VM)
00:01:00.308 [Pipeline] sh
00:01:00.589 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:00.589 + echo 'Start stage prepare_nvme.sh'
00:01:00.589 Start stage prepare_nvme.sh
00:01:00.589 + [[ -n 7 ]]
00:01:00.589 + disk_prefix=ex7
00:01:00.589 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:00.589 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:00.589 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:00.589 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:00.589 ++ SPDK_TEST_NVME=1
00:01:00.589 ++ SPDK_TEST_FTL=1
00:01:00.589 ++ SPDK_TEST_ISAL=1
00:01:00.589 ++ SPDK_RUN_ASAN=1
00:01:00.589 ++ SPDK_RUN_UBSAN=1
00:01:00.589 ++ SPDK_TEST_XNVME=1
00:01:00.589 ++ SPDK_TEST_NVME_FDP=1
00:01:00.589 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:00.589 ++ RUN_NIGHTLY=0
00:01:00.589 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:00.589 + nvme_files=()
00:01:00.589 + declare -A nvme_files
00:01:00.589 + backend_dir=/var/lib/libvirt/images/backends
00:01:00.589 + nvme_files['nvme.img']=5G
00:01:00.589 + nvme_files['nvme-cmb.img']=5G
00:01:00.589 + nvme_files['nvme-multi0.img']=4G
00:01:00.589 + nvme_files['nvme-multi1.img']=4G
00:01:00.589 + nvme_files['nvme-multi2.img']=4G
00:01:00.589 + nvme_files['nvme-openstack.img']=8G
00:01:00.589 + nvme_files['nvme-zns.img']=5G
00:01:00.589 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:00.589 + (( SPDK_TEST_FTL == 1 ))
00:01:00.589 + nvme_files["nvme-ftl.img"]=6G
00:01:00.589 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:00.589 + nvme_files["nvme-fdp.img"]=1G
00:01:00.589 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:00.589 + for nvme in "${!nvme_files[@]}"
00:01:00.589 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:01:00.589 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:00.589 + for nvme in "${!nvme_files[@]}"
00:01:00.589 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G
00:01:00.589 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:00.589 + for nvme in "${!nvme_files[@]}"
00:01:00.589 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:01:00.589 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:00.589 + for nvme in "${!nvme_files[@]}"
00:01:00.589 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:01:00.848 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:00.848 + for nvme in "${!nvme_files[@]}"
00:01:00.848 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:01:00.848 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:00.848 + for nvme in "${!nvme_files[@]}"
00:01:00.848 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:01:00.848 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:00.848 + for nvme in "${!nvme_files[@]}"
00:01:00.848 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:01:00.848 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:00.848 + for nvme in "${!nvme_files[@]}"
00:01:00.848 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G
00:01:00.848 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:00.848 + for nvme in "${!nvme_files[@]}"
00:01:00.848 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:01:01.107 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:01.107 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:01:01.366 + echo 'End stage prepare_nvme.sh'
00:01:01.366 End stage prepare_nvme.sh
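Each "Formatting ..." line above reports a raw backing image being created with fallocate()-style preallocation. create_nvme_img.sh is an SPDK helper whose internals are not shown in this log, so the following is only a rough sketch of an equivalent standalone command for the last image; the qemu-img invocation is an assumption inferred from the fmt/size/preallocation fields above, not taken from the script itself:

    # Hypothetical equivalent of the helper's last invocation above:
    # a 5 GiB raw image preallocated via fallocate(), matching
    # "fmt=raw size=5368709120 preallocation=falloc".
    sudo qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex7-nvme.img 5G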
00:01:01.377 [Pipeline] sh
00:01:01.659 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:01.660 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:01.660
00:01:01.660 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:01.660 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:01.660 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:01.660 HELP=0
00:01:01.660 DRY_RUN=0
00:01:01.660 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,
00:01:01.660 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:01.660 NVME_AUTO_CREATE=0
00:01:01.660 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,,
00:01:01.660 NVME_CMB=,,,,
00:01:01.660 NVME_PMR=,,,,
00:01:01.660 NVME_ZNS=,,,,
00:01:01.660 NVME_MS=true,,,,
00:01:01.660 NVME_FDP=,,,on,
00:01:01.660 SPDK_VAGRANT_DISTRO=fedora39
00:01:01.660 SPDK_VAGRANT_VMCPU=10
00:01:01.660 SPDK_VAGRANT_VMRAM=12288
00:01:01.660 SPDK_VAGRANT_PROVIDER=libvirt
00:01:01.660 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:01.660 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:01.660 SPDK_OPENSTACK_NETWORK=0
00:01:01.660 VAGRANT_PACKAGE_BOX=0
00:01:01.660 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:01.660 FORCE_DISTRO=true
00:01:01.660 VAGRANT_BOX_VERSION=
00:01:01.660 EXTRA_VAGRANTFILES=
00:01:01.660 NIC_MODEL=e1000
00:01:01.660
00:01:01.660 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:01.660 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:04.193 Bringing machine 'default' up with 'libvirt' provider...
00:01:05.595 ==> default: Creating image (snapshot of base box volume).
00:01:05.595 ==> default: Creating domain with the following settings...
00:01:05.595 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732114445_ec4f031710b76c450530
00:01:05.595 ==> default: -- Domain type: kvm
00:01:05.595 ==> default: -- Cpus: 10
00:01:05.595 ==> default: -- Feature: acpi
00:01:05.595 ==> default: -- Feature: apic
00:01:05.595 ==> default: -- Feature: pae
00:01:05.595 ==> default: -- Memory: 12288M
00:01:05.595 ==> default: -- Memory Backing: hugepages:
00:01:05.595 ==> default: -- Management MAC:
00:01:05.595 ==> default: -- Loader:
00:01:05.595 ==> default: -- Nvram:
00:01:05.595 ==> default: -- Base box: spdk/fedora39
00:01:05.595 ==> default: -- Storage pool: default
00:01:05.595 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732114445_ec4f031710b76c450530.img (20G)
00:01:05.595 ==> default: -- Volume Cache: default
00:01:05.595 ==> default: -- Kernel:
00:01:05.595 ==> default: -- Initrd:
00:01:05.595 ==> default: -- Graphics Type: vnc
00:01:05.595 ==> default: -- Graphics Port: -1
00:01:05.595 ==> default: -- Graphics IP: 127.0.0.1
00:01:05.595 ==> default: -- Graphics Password: Not defined
00:01:05.595 ==> default: -- Video Type: cirrus
00:01:05.595 ==> default: -- Video VRAM: 9216
00:01:05.595 ==> default: -- Sound Type:
00:01:05.595 ==> default: -- Keymap: en-us
00:01:05.595 ==> default: -- TPM Path:
00:01:05.595 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:05.595 ==> default: -- Command line args:
00:01:05.595 ==> default: -> value=-device,
00:01:05.595 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:05.595 ==> default: -> value=-drive,
00:01:05.595 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:05.595 ==> default: -> value=-device,
00:01:05.595 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:05.595 ==> default: -> value=-device,
00:01:05.595 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:05.595 ==> default: -> value=-drive,
00:01:05.595 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0,
00:01:05.595 ==> default: -> value=-device,
00:01:05.595 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:05.595 ==> default: -> value=-device,
00:01:05.595 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:05.595 ==> default: -> value=-drive,
00:01:05.595 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:05.595 ==> default: -> value=-device,
00:01:05.595 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:05.595 ==> default: -> value=-drive,
00:01:05.595 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:05.595 ==> default: -> value=-device,
00:01:05.595 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:05.595 ==> default: -> value=-drive,
00:01:05.595 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:05.595 ==> default: -> value=-device,
00:01:05.595 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:05.595 ==> default: -> value=-device,
00:01:05.595 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:05.595 ==> default: -> value=-device,
00:01:05.595 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:05.595 ==> default: -> value=-drive,
00:01:05.595 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:05.595 ==> default: -> value=-device,
00:01:05.595 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
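The -device/-drive pairs above are handed to QEMU one value at a time, which makes them hard to read. Joined into a single invocation, the FDP-capable controller (nvme-3) alone would look like the sketch below; the option values are taken verbatim from the args dump above and the emulator path from SPDK_QEMU_EMULATOR earlier in the log, while every other machine option is omitted, so this is illustrative rather than a runnable VM definition:

    # nvme-3: NVMe subsystem with Flexible Data Placement (FDP) enabled,
    # one controller and one 4K-block namespace backed by ex7-nvme-fdp.img.
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096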
00:01:05.854 ==> default: Creating shared folders metadata...
00:01:05.854 ==> default: Starting domain.
00:01:07.761 ==> default: Waiting for domain to get an IP address...
00:01:25.856 ==> default: Waiting for SSH to become available...
00:01:25.856 ==> default: Configuring and enabling network interfaces...
00:01:30.049 default: SSH address: 192.168.121.157:22
00:01:30.049 default: SSH username: vagrant
00:01:30.049 default: SSH auth method: private key
00:01:32.608 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:42.591 ==> default: Mounting SSHFS shared folder...
00:01:43.529 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:43.529 ==> default: Checking Mount..
00:01:44.909 ==> default: Folder Successfully Mounted!
00:01:44.909 ==> default: Running provisioner: file...
00:01:46.291 default: ~/.gitconfig => .gitconfig
00:01:46.551
00:01:46.551 SUCCESS!
00:01:46.551
00:01:46.551 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:46.551 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:46.551 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:46.551
00:01:46.560 [Pipeline] }
00:01:46.577 [Pipeline] // stage
00:01:46.586 [Pipeline] dir
00:01:46.587 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:01:46.589 [Pipeline] {
00:01:46.602 [Pipeline] catchError
00:01:46.603 [Pipeline] {
00:01:46.615 [Pipeline] sh
00:01:46.922 + vagrant ssh-config --host vagrant
00:01:46.922 + sed -ne /^Host/,$p
00:01:46.922 + tee ssh_conf
00:01:50.231 Host vagrant
00:01:50.231 HostName 192.168.121.157
00:01:50.231 User vagrant
00:01:50.231 Port 22
00:01:50.231 UserKnownHostsFile /dev/null
00:01:50.231 StrictHostKeyChecking no
00:01:50.231 PasswordAuthentication no
00:01:50.231 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:50.231 IdentitiesOnly yes
00:01:50.231 LogLevel FATAL
00:01:50.231 ForwardAgent yes
00:01:50.231 ForwardX11 yes
00:01:50.231
00:01:50.246 [Pipeline] withEnv
00:01:50.248 [Pipeline] {
00:01:50.262 [Pipeline] sh
00:01:50.545 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:50.546 source /etc/os-release
00:01:50.546 [[ -e /image.version ]] && img=$(< /image.version)
00:01:50.546 # Minimal, systemd-like check.
00:01:50.546 if [[ -e /.dockerenv ]]; then
00:01:50.546 # Clear garbage from the node's name:
00:01:50.546 # agt-er_autotest_547-896 -> autotest_547-896
00:01:50.546 # $HOSTNAME is the actual container id
00:01:50.546 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:50.546 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:50.546 # We can assume this is a mount from a host where container is running,
00:01:50.546 # so fetch its hostname to easily identify the target swarm worker.
00:01:50.546 container="$(< /etc/hostname) ($agent)"
00:01:50.546 else
00:01:50.546 # Fallback
00:01:50.546 container=$agent
00:01:50.546 fi
00:01:50.546 fi
00:01:50.546 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:50.546
00:01:50.818 [Pipeline] }
00:01:50.834 [Pipeline] // withEnv
00:01:50.843 [Pipeline] setCustomBuildProperty
00:01:50.859 [Pipeline] stage
00:01:50.861 [Pipeline] { (Tests)
00:01:50.878 [Pipeline] sh
00:01:51.161 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:51.436 [Pipeline] sh
00:01:51.720 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:51.995 [Pipeline] timeout
00:01:51.995 Timeout set to expire in 50 min
00:01:51.997 [Pipeline] {
00:01:52.013 [Pipeline] sh
00:01:52.295 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:52.864 HEAD is now at c1691a126 bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext()
00:01:52.877 [Pipeline] sh
00:01:53.161 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:53.434 [Pipeline] sh
00:01:53.715 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:53.991 [Pipeline] sh
00:01:54.273 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:01:54.532 ++ readlink -f spdk_repo
00:01:54.532 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:54.532 + [[ -n /home/vagrant/spdk_repo ]]
00:01:54.532 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:54.532 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:54.532 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:54.532 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:54.532 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:54.532 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:54.532 + cd /home/vagrant/spdk_repo
00:01:54.532 + source /etc/os-release
00:01:54.532 ++ NAME='Fedora Linux'
00:01:54.532 ++ VERSION='39 (Cloud Edition)'
00:01:54.532 ++ ID=fedora
00:01:54.532 ++ VERSION_ID=39
00:01:54.532 ++ VERSION_CODENAME=
00:01:54.532 ++ PLATFORM_ID=platform:f39
00:01:54.532 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:54.532 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:54.532 ++ LOGO=fedora-logo-icon
00:01:54.532 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:54.532 ++ HOME_URL=https://fedoraproject.org/
00:01:54.532 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:54.532 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:54.532 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:54.532 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:54.532 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:54.532 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:54.532 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:54.532 ++ SUPPORT_END=2024-11-12
00:01:54.532 ++ VARIANT='Cloud Edition'
00:01:54.532 ++ VARIANT_ID=cloud
00:01:54.532 + uname -a
00:01:54.532 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:54.532 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:54.791 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:55.360 Hugepages
00:01:55.360 node hugesize free / total
00:01:55.360 node0 1048576kB 0 / 0
00:01:55.360 node0 2048kB 0 / 0
00:01:55.360
00:01:55.360 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:55.360 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:55.360 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:55.360 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:55.360 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:01:55.360 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:01:55.360 + rm -f /tmp/spdk-ld-path
00:01:55.360 + source autorun-spdk.conf
00:01:55.360 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:55.360 ++ SPDK_TEST_NVME=1
00:01:55.360 ++ SPDK_TEST_FTL=1
00:01:55.360 ++ SPDK_TEST_ISAL=1
00:01:55.360 ++ SPDK_RUN_ASAN=1
00:01:55.360 ++ SPDK_RUN_UBSAN=1
00:01:55.360 ++ SPDK_TEST_XNVME=1
00:01:55.360 ++ SPDK_TEST_NVME_FDP=1
00:01:55.360 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:55.360 ++ RUN_NIGHTLY=0
00:01:55.360 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:55.360 + [[ -n '' ]]
00:01:55.360 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:55.360 + for M in /var/spdk/build-*-manifest.txt
00:01:55.360 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:55.360 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:55.360 + for M in /var/spdk/build-*-manifest.txt
00:01:55.360 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:55.360 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:55.360 + for M in /var/spdk/build-*-manifest.txt
00:01:55.360 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:55.360 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:55.360 ++ uname
00:01:55.360 + [[ Linux == \L\i\n\u\x ]]
00:01:55.360 + sudo dmesg -T
00:01:55.619 + sudo dmesg --clear
00:01:55.620 + dmesg_pid=5238
00:01:55.620 + sudo dmesg -Tw
00:01:55.620 + [[ Fedora Linux == FreeBSD ]]
00:01:55.620 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:55.620 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:55.620 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:55.620 + [[ -x /usr/src/fio-static/fio ]]
00:01:55.620 + export FIO_BIN=/usr/src/fio-static/fio
00:01:55.620 + FIO_BIN=/usr/src/fio-static/fio
00:01:55.620 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:55.620 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:55.620 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:55.620 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:55.620 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:55.620 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:55.620 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:55.620 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:55.620 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:55.620 14:54:56 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:55.620 14:54:56 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:55.620 14:54:56 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:55.620 14:54:56 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:01:55.620 14:54:56 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:01:55.620 14:54:56 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:01:55.620 14:54:56 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:55.620 14:54:56 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:55.620 14:54:56 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:01:55.620 14:54:56 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:01:55.620 14:54:56 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:55.620 14:54:56 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:01:55.620 14:54:56 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:55.620 14:54:56 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:55.879 14:54:56 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:55.879 14:54:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:55.879 14:54:56 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:55.879 14:54:56 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:55.879 14:54:56 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:55.879 14:54:56 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:55.879 14:54:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.879 14:54:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.879 14:54:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.879 14:54:56 -- paths/export.sh@5 -- $ export PATH
00:01:55.879 14:54:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.879 14:54:56 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:55.879 14:54:56 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:55.879 14:54:56 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732114496.XXXXXX
00:01:55.879 14:54:56 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732114496.aFcEyO
00:01:55.879 14:54:56 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:55.879 14:54:56 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:55.879 14:54:56 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:55.879 14:54:56 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:55.879 14:54:56 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:55.879 14:54:56 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:55.879 14:54:56 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:55.879 14:54:56 -- common/autotest_common.sh@10 -- $ set +x
00:01:55.879 14:54:56 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:55.879 14:54:56 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:55.879 14:54:56 -- pm/common@17 -- $ local monitor
00:01:55.879 14:54:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:55.879 14:54:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:55.879 14:54:56 -- pm/common@21 -- $ date +%s
00:01:55.879 14:54:56 -- pm/common@25 -- $ sleep 1
00:01:55.879 14:54:56 -- pm/common@21 -- $ date +%s
00:01:55.879 14:54:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732114496
00:01:55.879 14:54:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732114496
00:01:55.879 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732114496_collect-cpu-load.pm.log
00:01:55.879 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732114496_collect-vmstat.pm.log
00:01:56.818 14:54:57 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:56.818 14:54:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:56.818 14:54:57 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:56.818 14:54:57 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:56.818 14:54:57 -- spdk/autobuild.sh@16 -- $ date -u
00:01:56.818 Wed Nov 20 02:54:57 PM UTC 2024
00:01:56.818 14:54:57 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:56.818 v25.01-pre-226-gc1691a126
00:01:56.818 14:54:57 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:56.818 14:54:57 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:56.818 14:54:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:56.818 14:54:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:56.818 14:54:57 -- common/autotest_common.sh@10 -- $ set +x
00:01:56.818 ************************************
00:01:56.818 START TEST asan
00:01:56.818 ************************************
00:01:56.818 using asan
00:01:56.818 14:54:57 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:56.818
00:01:56.818 real 0m0.001s
00:01:56.818 user 0m0.000s
00:01:56.818 sys 0m0.000s
00:01:56.818 14:54:57 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:56.818 ************************************
00:01:56.818 END TEST asan
00:01:56.818 ************************************
00:01:56.818 14:54:57 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:56.818 14:54:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:56.818 14:54:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:56.818 14:54:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:56.818 14:54:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:56.818 14:54:57 -- common/autotest_common.sh@10 -- $ set +x
00:01:57.077 ************************************
00:01:57.077 START TEST ubsan
00:01:57.077 ************************************
00:01:57.077 using ubsan
00:01:57.077 14:54:57 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:57.077
00:01:57.077 real 0m0.000s
00:01:57.077 user 0m0.000s
00:01:57.077 sys 0m0.000s
00:01:57.077 14:54:57 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:57.077 14:54:57 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:57.077 ************************************
00:01:57.077 END TEST ubsan
00:01:57.077 ************************************
00:01:57.077 14:54:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:57.077 14:54:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:57.077 14:54:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:57.077 14:54:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:57.077 14:54:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:57.077 14:54:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:57.077 14:54:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:57.077 14:54:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:57.077 14:54:57 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:57.077 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:57.077 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:57.646 Using 'verbs' RDMA provider
00:02:13.467 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:31.565 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:31.565 Creating mk/config.mk...done.
00:02:31.565 Creating mk/cc.flags.mk...done.
00:02:31.565 Type 'make' to build.
00:02:31.565 14:55:30 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:31.565 14:55:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:31.565 14:55:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:31.565 14:55:30 -- common/autotest_common.sh@10 -- $ set +x
00:02:31.566 ************************************
00:02:31.566 START TEST make
00:02:31.566 ************************************
00:02:31.566 14:55:30 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:31.566 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:31.566 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:31.566 meson setup builddir \
00:02:31.566 -Dwith-libaio=enabled \
00:02:31.566 -Dwith-liburing=enabled \
00:02:31.566 -Dwith-libvfn=disabled \
00:02:31.566 -Dwith-spdk=disabled \
00:02:31.566 -Dexamples=false \
00:02:31.566 -Dtests=false \
00:02:31.566 -Dtools=false && \
00:02:31.566 meson compile -C builddir && \
00:02:31.566 cd -)
00:02:31.566 make[1]: Nothing to be done for 'all'.
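The recipe echoed by make above is the complete xnvme sub-build. As a minimal sketch for reproducing that step by hand, using exactly the flags shown in the log (the spdk_repo path assumes the VM layout used by this job):

    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
    # Configure with the same feature toggles the CI build uses:
    # libaio and io_uring backends on; libvfn/SPDK backends and extras off.
    meson setup builddir \
        -Dwith-libaio=enabled -Dwith-liburing=enabled -Dwith-libvfn=disabled \
        -Dwith-spdk=disabled -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir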
00:02:32.503 The Meson build system
00:02:32.503 Version: 1.5.0
00:02:32.503 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:32.503 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:32.503 Build type: native build
00:02:32.503 Project name: xnvme
00:02:32.504 Project version: 0.7.5
00:02:32.504 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:32.504 C linker for the host machine: cc ld.bfd 2.40-14
00:02:32.504 Host machine cpu family: x86_64
00:02:32.504 Host machine cpu: x86_64
00:02:32.504 Message: host_machine.system: linux
00:02:32.504 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:32.504 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:32.504 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:32.504 Run-time dependency threads found: YES
00:02:32.504 Has header "setupapi.h" : NO
00:02:32.504 Has header "linux/blkzoned.h" : YES
00:02:32.504 Has header "linux/blkzoned.h" : YES (cached)
00:02:32.504 Has header "libaio.h" : YES
00:02:32.504 Library aio found: YES
00:02:32.504 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:32.504 Run-time dependency liburing found: YES 2.2
00:02:32.504 Dependency libvfn skipped: feature with-libvfn disabled
00:02:32.504 Found CMake: /usr/bin/cmake (3.27.7)
00:02:32.504 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:32.504 Subproject spdk : skipped: feature with-spdk disabled
00:02:32.504 Run-time dependency appleframeworks found: NO (tried framework)
00:02:32.504 Run-time dependency appleframeworks found: NO (tried framework)
00:02:32.504 Library rt found: YES
00:02:32.504 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:32.504 Configuring xnvme_config.h using configuration
00:02:32.504 Configuring xnvme.spec using configuration
00:02:32.504 Run-time dependency bash-completion found: YES 2.11
00:02:32.504 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:32.504 Program cp found: YES (/usr/bin/cp)
00:02:32.504 Build targets in project: 3
00:02:32.504
00:02:32.504 xnvme 0.7.5
00:02:32.504
00:02:32.504 Subprojects
00:02:32.504 spdk : NO Feature 'with-spdk' disabled
00:02:32.504
00:02:32.504 User defined options
00:02:32.504 examples : false
00:02:32.504 tests : false
00:02:32.504 tools : false
00:02:32.504 with-libaio : enabled
00:02:32.504 with-liburing: enabled
00:02:32.504 with-libvfn : disabled
00:02:32.504 with-spdk : disabled
00:02:32.504
00:02:32.504 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:32.763 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:32.763 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:33.022 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:33.022 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:33.022 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:33.022 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:33.022 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:33.022 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:33.022 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:33.022 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:33.022 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:33.022 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:33.022 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:33.022 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:33.022 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:33.022 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:33.022 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:33.022 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:33.022 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:33.022 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:33.022 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:33.022 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:33.022 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:33.022 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:33.283 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:33.283 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:33.283 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:33.283 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:33.283 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:33.283 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:33.283 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:33.283 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:33.283 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:33.283 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:33.283 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:33.283 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:33.283 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:33.283 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:33.283 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:33.283 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:33.283 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:33.283 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:33.283 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:33.283 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:33.283 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:33.283 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:33.283 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:33.283 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:33.283 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:33.283 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:33.283 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:33.283 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:33.283 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:33.283 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:33.283 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:33.283 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:33.542 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:33.542 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:33.542 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:33.542 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:33.542 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:33.542 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:33.542 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:33.542 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:33.542 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:33.542 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:33.542 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:33.542 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:33.542 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:33.542 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:33.542 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:33.542 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:33.542 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:33.542 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:34.109 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:34.109 [75/76] Linking static target lib/libxnvme.a
00:02:34.109 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:34.109 INFO: autodetecting backend as ninja
00:02:34.109 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:34.109 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:42.241 The Meson build system
00:02:42.241 Version: 1.5.0
00:02:42.241 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:42.241 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:42.241 Build type: native build
00:02:42.241 Program cat found: YES (/usr/bin/cat)
00:02:42.241 Project name: DPDK
00:02:42.241 Project version: 24.03.0
00:02:42.241 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:42.241 C linker for the host machine: cc ld.bfd 2.40-14
00:02:42.241 Host machine cpu family: x86_64
00:02:42.241 Host machine cpu: x86_64
00:02:42.241 Message: ## Building in Developer Mode ##
00:02:42.241 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:42.241 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:42.241 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:42.241 Program python3 found: YES (/usr/bin/python3)
00:02:42.241 Program cat found: YES (/usr/bin/cat)
00:02:42.241 Compiler for C supports arguments -march=native: YES
00:02:42.241 Checking for size of "void *" : 8
00:02:42.241 Checking for size of "void *" : 8 (cached)
00:02:42.241 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:42.241 Library m found: YES
00:02:42.241 Library numa found: YES
00:02:42.241 Has header "numaif.h" : YES
00:02:42.241 Library fdt found: NO
00:02:42.241 Library execinfo found: NO
00:02:42.241 Has header "execinfo.h" : YES
00:02:42.241 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:42.241 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:42.241 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:42.241 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:42.241 Run-time dependency openssl found: YES 3.1.1
00:02:42.241 Run-time dependency libpcap found: YES 1.10.4
00:02:42.241 Has header "pcap.h" with dependency libpcap: YES
00:02:42.241 Compiler for C supports arguments -Wcast-qual: YES
00:02:42.241 Compiler for C supports arguments -Wdeprecated: YES
00:02:42.242 Compiler for C supports arguments -Wformat: YES
00:02:42.242 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:42.242 Compiler for C supports arguments -Wformat-security: NO
00:02:42.242 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:42.242 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:42.242 Compiler for C supports arguments -Wnested-externs: YES
00:02:42.242 Compiler for C supports arguments -Wold-style-definition: YES
00:02:42.242 Compiler for C supports arguments -Wpointer-arith: YES
00:02:42.242 Compiler for C supports arguments -Wsign-compare: YES
00:02:42.242 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:42.242 Compiler for C supports arguments -Wundef: YES
00:02:42.242 Compiler for C supports arguments -Wwrite-strings: YES
00:02:42.242 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:42.242 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:42.242 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:42.242 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:42.242 Program objdump found: YES (/usr/bin/objdump)
00:02:42.242 Compiler for C supports arguments -mavx512f: YES
00:02:42.242 Checking if "AVX512 checking" compiles: YES
00:02:42.242 Fetching value of define "__SSE4_2__" : 1
00:02:42.242 Fetching value of define "__AES__" : 1
00:02:42.242 Fetching value of define "__AVX__" : 1
00:02:42.242 Fetching value of define "__AVX2__" : 1
00:02:42.242 Fetching value of define "__AVX512BW__" : 1
00:02:42.242 Fetching value of define "__AVX512CD__" : 1
00:02:42.242 Fetching value of define "__AVX512DQ__" : 1
00:02:42.242 Fetching value of define "__AVX512F__" : 1
00:02:42.242 Fetching value of define "__AVX512VL__" : 1
00:02:42.242 Fetching value of define "__PCLMUL__" : 1
00:02:42.242 Fetching value of define "__RDRND__" : 1
00:02:42.242 Fetching value of define "__RDSEED__" : 1
00:02:42.242 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:42.242 Fetching value of define "__znver1__" : (undefined)
00:02:42.242 Fetching value of define "__znver2__" : (undefined)
00:02:42.242 Fetching value of define "__znver3__" : (undefined)
00:02:42.242 Fetching value of define "__znver4__" : (undefined)
00:02:42.242 Library asan found: YES
00:02:42.242 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:42.242 Message: lib/log: Defining dependency "log"
00:02:42.242 Message: lib/kvargs: Defining dependency "kvargs"
00:02:42.242 Message: lib/telemetry: Defining dependency "telemetry"
00:02:42.242 Library rt found: YES
00:02:42.242 Checking for function "getentropy" : NO
00:02:42.242 Message: lib/eal: Defining dependency "eal"
00:02:42.242 Message: lib/ring: Defining dependency "ring"
00:02:42.242 Message: lib/rcu: Defining dependency "rcu"
00:02:42.242 Message: lib/mempool: Defining dependency "mempool"
00:02:42.242 Message: lib/mbuf: Defining dependency "mbuf"
00:02:42.242 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:42.242 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:42.242 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:42.242 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:42.242 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:42.242 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:42.242 Compiler for C supports arguments -mpclmul: YES
00:02:42.242 Compiler for C supports arguments -maes: YES
00:02:42.242 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:42.242 Compiler for C supports arguments -mavx512bw: YES
00:02:42.242 Compiler for C supports arguments -mavx512dq: YES
00:02:42.242 Compiler for C supports arguments -mavx512vl: YES
00:02:42.242 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:42.242 Compiler for C supports arguments -mavx2: YES
00:02:42.242 Compiler for C supports arguments -mavx: YES
00:02:42.242 Message: lib/net: Defining dependency "net"
00:02:42.242 Message: lib/meter: Defining dependency "meter"
00:02:42.242 Message: lib/ethdev: Defining dependency "ethdev"
00:02:42.242 Message: lib/pci: Defining dependency "pci"
00:02:42.242 Message: lib/cmdline: Defining dependency "cmdline"
00:02:42.242 Message: lib/hash: Defining dependency "hash"
00:02:42.242 Message: lib/timer: Defining dependency "timer"
00:02:42.242 Message: lib/compressdev: Defining dependency "compressdev"
00:02:42.242 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:42.242 Message: lib/dmadev: Defining dependency "dmadev"
00:02:42.242 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:42.242 Message: lib/power: Defining dependency "power"
00:02:42.242 Message: lib/reorder: Defining dependency "reorder"
00:02:42.242 Message: lib/security: Defining dependency "security"
00:02:42.242 Has header "linux/userfaultfd.h" : YES
00:02:42.242 Has header "linux/vduse.h" : YES
00:02:42.242 Message: lib/vhost: Defining dependency "vhost"
00:02:42.242 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:42.242 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:42.242 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:42.242 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:42.242 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:42.242 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:42.242 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:42.242 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:42.242 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:42.242 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:42.242 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:42.242 Configuring doxy-api-html.conf using configuration
00:02:42.242 Configuring doxy-api-man.conf using configuration
00:02:42.242 Program mandb found: YES (/usr/bin/mandb)
00:02:42.242 Program sphinx-build found: NO
00:02:42.242 Configuring rte_build_config.h using configuration
00:02:42.242 Message:
00:02:42.242 =================
00:02:42.242 Applications Enabled
00:02:42.242 =================
00:02:42.242
00:02:42.242 apps:
00:02:42.242
00:02:42.242
00:02:42.242 Message:
00:02:42.242 =================
00:02:42.242 Libraries Enabled
00:02:42.242 =================
00:02:42.242
00:02:42.242 libs:
00:02:42.242 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:42.242 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:42.242 cryptodev, dmadev, power, reorder, security, vhost,
00:02:42.242
00:02:42.242 Message:
00:02:42.242 ===============
00:02:42.242 Drivers Enabled
00:02:42.242 ===============
00:02:42.242
00:02:42.242 common:
00:02:42.242
00:02:42.242 bus:
00:02:42.242 pci, vdev,
00:02:42.242 mempool:
00:02:42.242 ring,
00:02:42.242 dma:
00:02:42.242
00:02:42.242 net:
00:02:42.242
00:02:42.242 crypto:
00:02:42.242
00:02:42.242 compress:
00:02:42.242
00:02:42.242 vdpa:
00:02:42.242
00:02:42.242
00:02:42.242 Message:
00:02:42.242 =================
00:02:42.242 Content Skipped
00:02:42.242 =================
00:02:42.242
00:02:42.242 apps:
00:02:42.242 dumpcap: explicitly disabled via build config
00:02:42.242 graph: explicitly disabled via build config
00:02:42.242 pdump: explicitly disabled via build config
00:02:42.242 proc-info: explicitly disabled via build config
00:02:42.242 test-acl: explicitly disabled via build config
00:02:42.242 test-bbdev: explicitly disabled via build config
00:02:42.242 test-cmdline: explicitly disabled via build config
00:02:42.242 test-compress-perf: explicitly disabled via build config
00:02:42.242 test-crypto-perf: explicitly disabled via build config
00:02:42.242 test-dma-perf: explicitly disabled via build config
00:02:42.242 test-eventdev: explicitly disabled via build config
00:02:42.242 test-fib: explicitly disabled via build config
00:02:42.242 test-flow-perf: explicitly disabled via build config
00:02:42.242 test-gpudev: explicitly disabled via build config
00:02:42.242 test-mldev: explicitly disabled via build config
00:02:42.242 test-pipeline: explicitly disabled via build config
00:02:42.242 test-pmd: explicitly disabled via build config
00:02:42.242 test-regex: explicitly disabled via build config
00:02:42.242 test-sad: explicitly disabled via build config
00:02:42.242 test-security-perf: explicitly disabled via build config
00:02:42.242
00:02:42.242 libs:
00:02:42.242 argparse: explicitly disabled via build config
00:02:42.242 metrics: explicitly disabled via build config
00:02:42.242 acl: explicitly disabled via build config
00:02:42.242 bbdev: explicitly disabled via build config
00:02:42.242 bitratestats: explicitly disabled via build config
00:02:42.242 bpf: explicitly disabled via build config
00:02:42.242 cfgfile: explicitly disabled via build config
00:02:42.242 distributor: explicitly disabled via build config
00:02:42.242 efd: explicitly disabled via build config
00:02:42.242 eventdev: explicitly disabled via build config
00:02:42.242 dispatcher: explicitly disabled via build config
00:02:42.242 gpudev: explicitly disabled via build config
00:02:42.242 gro: explicitly disabled via build config
00:02:42.242 gso: explicitly disabled via build config
00:02:42.242 ip_frag: explicitly disabled via build config
00:02:42.242 jobstats: explicitly disabled via build config
00:02:42.242 latencystats: explicitly disabled via build config
00:02:42.242 lpm: explicitly disabled via build config
00:02:42.242 member: explicitly disabled via build config
00:02:42.242 pcapng: explicitly disabled via build config
00:02:42.242 rawdev: explicitly disabled via build config
regexdev: explicitly disabled via build config 00:02:42.242 mldev: explicitly disabled via build config 00:02:42.242 rib: explicitly disabled via build config 00:02:42.242 sched: explicitly disabled via build config 00:02:42.242 stack: explicitly disabled via build config 00:02:42.242 ipsec: explicitly disabled via build config 00:02:42.242 pdcp: explicitly disabled via build config 00:02:42.242 fib: explicitly disabled via build config 00:02:42.242 port: explicitly disabled via build config 00:02:42.242 pdump: explicitly disabled via build config 00:02:42.242 table: explicitly disabled via build config 00:02:42.242 pipeline: explicitly disabled via build config 00:02:42.242 graph: explicitly disabled via build config 00:02:42.242 node: explicitly disabled via build config 00:02:42.242 00:02:42.242 drivers: 00:02:42.242 common/cpt: not in enabled drivers build config 00:02:42.242 common/dpaax: not in enabled drivers build config 00:02:42.242 common/iavf: not in enabled drivers build config 00:02:42.242 common/idpf: not in enabled drivers build config 00:02:42.242 common/ionic: not in enabled drivers build config 00:02:42.242 common/mvep: not in enabled drivers build config 00:02:42.242 common/octeontx: not in enabled drivers build config 00:02:42.243 bus/auxiliary: not in enabled drivers build config 00:02:42.243 bus/cdx: not in enabled drivers build config 00:02:42.243 bus/dpaa: not in enabled drivers build config 00:02:42.243 bus/fslmc: not in enabled drivers build config 00:02:42.243 bus/ifpga: not in enabled drivers build config 00:02:42.243 bus/platform: not in enabled drivers build config 00:02:42.243 bus/uacce: not in enabled drivers build config 00:02:42.243 bus/vmbus: not in enabled drivers build config 00:02:42.243 common/cnxk: not in enabled drivers build config 00:02:42.243 common/mlx5: not in enabled drivers build config 00:02:42.243 common/nfp: not in enabled drivers build config 00:02:42.243 common/nitrox: not in enabled drivers build config 00:02:42.243 common/qat: not in enabled drivers build config 00:02:42.243 common/sfc_efx: not in enabled drivers build config 00:02:42.243 mempool/bucket: not in enabled drivers build config 00:02:42.243 mempool/cnxk: not in enabled drivers build config 00:02:42.243 mempool/dpaa: not in enabled drivers build config 00:02:42.243 mempool/dpaa2: not in enabled drivers build config 00:02:42.243 mempool/octeontx: not in enabled drivers build config 00:02:42.243 mempool/stack: not in enabled drivers build config 00:02:42.243 dma/cnxk: not in enabled drivers build config 00:02:42.243 dma/dpaa: not in enabled drivers build config 00:02:42.243 dma/dpaa2: not in enabled drivers build config 00:02:42.243 dma/hisilicon: not in enabled drivers build config 00:02:42.243 dma/idxd: not in enabled drivers build config 00:02:42.243 dma/ioat: not in enabled drivers build config 00:02:42.243 dma/skeleton: not in enabled drivers build config 00:02:42.243 net/af_packet: not in enabled drivers build config 00:02:42.243 net/af_xdp: not in enabled drivers build config 00:02:42.243 net/ark: not in enabled drivers build config 00:02:42.243 net/atlantic: not in enabled drivers build config 00:02:42.243 net/avp: not in enabled drivers build config 00:02:42.243 net/axgbe: not in enabled drivers build config 00:02:42.243 net/bnx2x: not in enabled drivers build config 00:02:42.243 net/bnxt: not in enabled drivers build config 00:02:42.243 net/bonding: not in enabled drivers build config 00:02:42.243 net/cnxk: not in enabled drivers build config 00:02:42.243 net/cpfl: 
not in enabled drivers build config 00:02:42.243 net/cxgbe: not in enabled drivers build config 00:02:42.243 net/dpaa: not in enabled drivers build config 00:02:42.243 net/dpaa2: not in enabled drivers build config 00:02:42.243 net/e1000: not in enabled drivers build config 00:02:42.243 net/ena: not in enabled drivers build config 00:02:42.243 net/enetc: not in enabled drivers build config 00:02:42.243 net/enetfec: not in enabled drivers build config 00:02:42.243 net/enic: not in enabled drivers build config 00:02:42.243 net/failsafe: not in enabled drivers build config 00:02:42.243 net/fm10k: not in enabled drivers build config 00:02:42.243 net/gve: not in enabled drivers build config 00:02:42.243 net/hinic: not in enabled drivers build config 00:02:42.243 net/hns3: not in enabled drivers build config 00:02:42.243 net/i40e: not in enabled drivers build config 00:02:42.243 net/iavf: not in enabled drivers build config 00:02:42.243 net/ice: not in enabled drivers build config 00:02:42.243 net/idpf: not in enabled drivers build config 00:02:42.243 net/igc: not in enabled drivers build config 00:02:42.243 net/ionic: not in enabled drivers build config 00:02:42.243 net/ipn3ke: not in enabled drivers build config 00:02:42.243 net/ixgbe: not in enabled drivers build config 00:02:42.243 net/mana: not in enabled drivers build config 00:02:42.243 net/memif: not in enabled drivers build config 00:02:42.243 net/mlx4: not in enabled drivers build config 00:02:42.243 net/mlx5: not in enabled drivers build config 00:02:42.243 net/mvneta: not in enabled drivers build config 00:02:42.243 net/mvpp2: not in enabled drivers build config 00:02:42.243 net/netvsc: not in enabled drivers build config 00:02:42.243 net/nfb: not in enabled drivers build config 00:02:42.243 net/nfp: not in enabled drivers build config 00:02:42.243 net/ngbe: not in enabled drivers build config 00:02:42.243 net/null: not in enabled drivers build config 00:02:42.243 net/octeontx: not in enabled drivers build config 00:02:42.243 net/octeon_ep: not in enabled drivers build config 00:02:42.243 net/pcap: not in enabled drivers build config 00:02:42.243 net/pfe: not in enabled drivers build config 00:02:42.243 net/qede: not in enabled drivers build config 00:02:42.243 net/ring: not in enabled drivers build config 00:02:42.243 net/sfc: not in enabled drivers build config 00:02:42.243 net/softnic: not in enabled drivers build config 00:02:42.243 net/tap: not in enabled drivers build config 00:02:42.243 net/thunderx: not in enabled drivers build config 00:02:42.243 net/txgbe: not in enabled drivers build config 00:02:42.243 net/vdev_netvsc: not in enabled drivers build config 00:02:42.243 net/vhost: not in enabled drivers build config 00:02:42.243 net/virtio: not in enabled drivers build config 00:02:42.243 net/vmxnet3: not in enabled drivers build config 00:02:42.243 raw/*: missing internal dependency, "rawdev" 00:02:42.243 crypto/armv8: not in enabled drivers build config 00:02:42.243 crypto/bcmfs: not in enabled drivers build config 00:02:42.243 crypto/caam_jr: not in enabled drivers build config 00:02:42.243 crypto/ccp: not in enabled drivers build config 00:02:42.243 crypto/cnxk: not in enabled drivers build config 00:02:42.243 crypto/dpaa_sec: not in enabled drivers build config 00:02:42.243 crypto/dpaa2_sec: not in enabled drivers build config 00:02:42.243 crypto/ipsec_mb: not in enabled drivers build config 00:02:42.243 crypto/mlx5: not in enabled drivers build config 00:02:42.243 crypto/mvsam: not in enabled drivers build config 
00:02:42.243 crypto/nitrox: not in enabled drivers build config 00:02:42.243 crypto/null: not in enabled drivers build config 00:02:42.243 crypto/octeontx: not in enabled drivers build config 00:02:42.243 crypto/openssl: not in enabled drivers build config 00:02:42.243 crypto/scheduler: not in enabled drivers build config 00:02:42.243 crypto/uadk: not in enabled drivers build config 00:02:42.243 crypto/virtio: not in enabled drivers build config 00:02:42.243 compress/isal: not in enabled drivers build config 00:02:42.243 compress/mlx5: not in enabled drivers build config 00:02:42.243 compress/nitrox: not in enabled drivers build config 00:02:42.243 compress/octeontx: not in enabled drivers build config 00:02:42.243 compress/zlib: not in enabled drivers build config 00:02:42.243 regex/*: missing internal dependency, "regexdev" 00:02:42.243 ml/*: missing internal dependency, "mldev" 00:02:42.243 vdpa/ifc: not in enabled drivers build config 00:02:42.243 vdpa/mlx5: not in enabled drivers build config 00:02:42.243 vdpa/nfp: not in enabled drivers build config 00:02:42.243 vdpa/sfc: not in enabled drivers build config 00:02:42.243 event/*: missing internal dependency, "eventdev" 00:02:42.243 baseband/*: missing internal dependency, "bbdev" 00:02:42.243 gpu/*: missing internal dependency, "gpudev" 00:02:42.243 00:02:42.243 00:02:42.243 Build targets in project: 85 00:02:42.243 00:02:42.243 DPDK 24.03.0 00:02:42.243 00:02:42.243 User defined options 00:02:42.243 buildtype : debug 00:02:42.243 default_library : shared 00:02:42.243 libdir : lib 00:02:42.243 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:42.243 b_sanitize : address 00:02:42.243 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:42.243 c_link_args : 00:02:42.243 cpu_instruction_set: native 00:02:42.243 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:42.243 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:42.243 enable_docs : false 00:02:42.243 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:42.243 enable_kmods : false 00:02:42.243 max_lcores : 128 00:02:42.243 tests : false 00:02:42.243 00:02:42.243 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:42.243 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:42.243 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:42.243 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:42.243 [3/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:42.243 [4/268] Linking static target lib/librte_kvargs.a 00:02:42.243 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:42.243 [6/268] Linking static target lib/librte_log.a 00:02:42.243 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:42.520 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:42.520 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 
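For reference, the "User defined options" summary a few lines up maps directly onto a meson invocation. Below is a minimal sketch of the equivalent command, reconstructed from the logged option values rather than taken from the build scripts themselves; the full disable_apps/disable_libs/enable_drivers lists are exactly the comma-separated values printed in the summary and are shortened here for brevity.

  # Sketch only: option values copied from the "User defined options"
  # summary above; the list-valued options are truncated.
  meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      --libdir=lib \
      -Dbuildtype=debug \
      -Ddefault_library=shared \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Dmax_lcores=128 \
      -Dtests=false \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Ddisable_apps=dumpcap,graph,pdump \
      -Ddisable_libs=acl,argparse,bbdev \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring

The debug buildtype together with b_sanitize=address accounts for the unoptimized, ASAN-instrumented objects produced in the [N/268] compile steps that follow.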
00:02:42.520 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:42.520 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:42.520 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:42.520 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.520 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:42.520 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:42.520 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:42.520 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:42.520 [18/268] Linking static target lib/librte_telemetry.a 00:02:43.090 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:43.090 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.090 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:43.090 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:43.090 [23/268] Linking target lib/librte_log.so.24.1 00:02:43.090 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:43.090 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:43.090 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:43.090 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:43.090 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:43.350 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:43.350 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:43.350 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:43.350 [32/268] Linking target lib/librte_kvargs.so.24.1 00:02:43.609 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.609 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:43.609 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:43.609 [36/268] Linking target lib/librte_telemetry.so.24.1 00:02:43.609 [37/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:43.609 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:43.609 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:43.609 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:43.868 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:43.868 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:43.868 [43/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:43.868 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:43.868 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:43.868 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:44.127 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:44.127 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:44.127 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:44.387 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:44.387 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:44.387 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:44.387 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:44.387 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:44.647 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:44.647 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:44.647 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:44.647 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:44.647 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:44.906 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:44.906 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:44.906 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:44.906 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:44.906 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:44.906 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:45.166 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:45.166 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:45.166 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:45.425 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:45.425 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:45.425 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:45.425 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:45.425 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:45.685 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:45.685 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:45.685 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:45.685 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:45.685 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:45.685 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:45.685 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:45.945 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:45.945 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:45.945 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:45.945 [84/268] Linking static target lib/librte_ring.a 00:02:45.945 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:45.945 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:45.945 [87/268] Linking static target lib/librte_eal.a 00:02:46.204 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:46.204 [89/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:46.464 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:46.464 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:46.464 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:46.464 [93/268] Linking static target lib/librte_mempool.a 00:02:46.464 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:46.464 [95/268] Linking static target lib/librte_rcu.a 00:02:46.464 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.464 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:46.464 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:46.723 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:46.723 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:46.723 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:46.982 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:46.982 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.982 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:46.982 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:46.982 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:46.982 [107/268] Linking static target lib/librte_meter.a 00:02:46.982 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:47.242 [109/268] Linking static target lib/librte_mbuf.a 00:02:47.242 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:47.242 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:47.242 [112/268] Linking static target lib/librte_net.a 00:02:47.501 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:47.501 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.501 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.501 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:47.760 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:47.760 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:47.760 [119/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.020 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:48.020 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:48.282 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:48.282 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.282 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:48.282 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:48.540 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:48.540 [127/268] Linking static target lib/librte_pci.a 00:02:48.540 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:48.540 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:48.540 [130/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:48.799 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:48.799 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:48.799 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:48.799 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.799 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:48.799 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:48.799 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:49.058 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:49.058 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:49.058 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:49.058 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:49.058 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:49.058 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:49.058 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:49.058 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:49.058 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:49.317 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:49.317 [148/268] Linking static target lib/librte_cmdline.a 00:02:49.317 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:49.575 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:49.575 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:49.575 [152/268] Linking static target lib/librte_timer.a 00:02:49.575 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:49.575 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:49.835 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:49.835 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:49.835 [157/268] Linking static target lib/librte_ethdev.a 00:02:50.094 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:50.094 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:50.094 [160/268] Linking static target lib/librte_compressdev.a 00:02:50.094 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:50.094 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:50.094 [163/268] Linking static target lib/librte_hash.a 00:02:50.094 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:50.094 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.353 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:50.353 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:50.353 [168/268] Linking static target lib/librte_dmadev.a 00:02:50.613 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:50.613 [170/268] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:50.613 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:50.613 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:50.872 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:50.872 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.872 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.131 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:51.131 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:51.131 [178/268] Linking static target lib/librte_cryptodev.a 00:02:51.131 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:51.391 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:51.391 [181/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.391 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.391 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:51.391 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:51.391 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:51.726 [186/268] Linking static target lib/librte_power.a 00:02:51.726 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:51.726 [188/268] Linking static target lib/librte_reorder.a 00:02:51.985 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:51.985 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:51.985 [191/268] Linking static target lib/librte_security.a 00:02:51.985 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:52.244 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:52.244 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:52.244 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.504 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.764 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.764 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:52.764 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:53.023 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:53.023 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:53.023 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:53.023 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:53.282 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:53.282 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:53.282 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:53.542 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:53.542 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:53.542 [209/268] Compiling C 
object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:53.542 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:53.542 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.801 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:53.801 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.801 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.801 [215/268] Linking static target drivers/librte_bus_pci.a 00:02:53.801 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:53.801 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.801 [218/268] Linking static target drivers/librte_bus_vdev.a 00:02:53.801 [219/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:53.801 [220/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.801 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:54.060 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:54.060 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:54.060 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:54.060 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:54.060 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.319 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.225 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:58.759 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.759 [230/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:58.759 [231/268] Linking static target lib/librte_vhost.a 00:02:58.759 [232/268] Linking target lib/librte_eal.so.24.1 00:02:59.018 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:59.018 [234/268] Linking target lib/librte_ring.so.24.1 00:02:59.018 [235/268] Linking target lib/librte_meter.so.24.1 00:02:59.018 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:59.018 [237/268] Linking target lib/librte_timer.so.24.1 00:02:59.018 [238/268] Linking target lib/librte_pci.so.24.1 00:02:59.018 [239/268] Linking target lib/librte_dmadev.so.24.1 00:02:59.018 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:59.018 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:59.018 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:59.018 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:59.018 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:59.018 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:59.018 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:59.018 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:59.277 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:59.277 [249/268] 
Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:59.277 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.277 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:59.277 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:59.536 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:59.537 [254/268] Linking target lib/librte_reorder.so.24.1 00:02:59.537 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:59.537 [256/268] Linking target lib/librte_net.so.24.1 00:02:59.537 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:59.537 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:59.537 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:59.537 [260/268] Linking target lib/librte_cmdline.so.24.1 00:02:59.537 [261/268] Linking target lib/librte_hash.so.24.1 00:02:59.537 [262/268] Linking target lib/librte_security.so.24.1 00:02:59.796 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:59.796 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:59.796 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:59.796 [266/268] Linking target lib/librte_power.so.24.1 00:03:00.734 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.993 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:00.993 INFO: autodetecting backend as ninja 00:03:00.993 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:19.082 CC lib/ut_mock/mock.o 00:03:19.082 CC lib/log/log.o 00:03:19.082 CC lib/log/log_flags.o 00:03:19.082 CC lib/log/log_deprecated.o 00:03:19.082 CC lib/ut/ut.o 00:03:19.341 LIB libspdk_ut.a 00:03:19.341 LIB libspdk_log.a 00:03:19.341 LIB libspdk_ut_mock.a 00:03:19.341 SO libspdk_ut.so.2.0 00:03:19.341 SO libspdk_log.so.7.1 00:03:19.341 SO libspdk_ut_mock.so.6.0 00:03:19.341 SYMLINK libspdk_ut.so 00:03:19.341 SYMLINK libspdk_log.so 00:03:19.341 SYMLINK libspdk_ut_mock.so 00:03:19.599 CXX lib/trace_parser/trace.o 00:03:19.599 CC lib/dma/dma.o 00:03:19.599 CC lib/util/crc16.o 00:03:19.600 CC lib/util/bit_array.o 00:03:19.600 CC lib/util/cpuset.o 00:03:19.600 CC lib/util/base64.o 00:03:19.600 CC lib/util/crc32c.o 00:03:19.600 CC lib/util/crc32.o 00:03:19.600 CC lib/ioat/ioat.o 00:03:19.859 CC lib/vfio_user/host/vfio_user_pci.o 00:03:19.859 CC lib/util/crc32_ieee.o 00:03:19.859 CC lib/util/crc64.o 00:03:19.859 CC lib/util/dif.o 00:03:19.859 CC lib/vfio_user/host/vfio_user.o 00:03:19.859 LIB libspdk_dma.a 00:03:19.859 CC lib/util/fd.o 00:03:19.859 SO libspdk_dma.so.5.0 00:03:19.859 CC lib/util/fd_group.o 00:03:19.859 CC lib/util/file.o 00:03:19.859 CC lib/util/hexlify.o 00:03:19.859 SYMLINK libspdk_dma.so 00:03:19.859 CC lib/util/iov.o 00:03:19.859 LIB libspdk_ioat.a 00:03:20.118 SO libspdk_ioat.so.7.0 00:03:20.118 CC lib/util/math.o 00:03:20.118 CC lib/util/net.o 00:03:20.118 CC lib/util/pipe.o 00:03:20.118 SYMLINK libspdk_ioat.so 00:03:20.118 CC lib/util/strerror_tls.o 00:03:20.118 CC lib/util/string.o 00:03:20.118 LIB libspdk_vfio_user.a 00:03:20.118 CC lib/util/uuid.o 00:03:20.118 SO libspdk_vfio_user.so.5.0 00:03:20.118 CC lib/util/xor.o 00:03:20.118 CC lib/util/zipf.o 00:03:20.118 SYMLINK libspdk_vfio_user.so 00:03:20.118 CC lib/util/md5.o 
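The [268/268] link step above completes SPDK's embedded DPDK build; everything from CC lib/ut_mock/mock.o onward is SPDK's own quiet make output (CC/CXX compiles, LIB static archives, SO/SYMLINK shared objects). A build producing output like this would be configured roughly as follows; this is a sketch inferred from the logged artifacts (debug DPDK with address sanitizer, a bdev_xnvme module compiled later in the log), not the literal command the CI scripts ran.

  # Sketch: flags inferred from the build output, not copied from the CI job.
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-asan --with-xnvme
  make -j10   # matches the '-j 10' ninja parallelism seen above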
00:03:20.378 LIB libspdk_util.a 00:03:20.638 SO libspdk_util.so.10.1 00:03:20.638 LIB libspdk_trace_parser.a 00:03:20.638 SO libspdk_trace_parser.so.6.0 00:03:20.638 SYMLINK libspdk_util.so 00:03:20.897 SYMLINK libspdk_trace_parser.so 00:03:20.897 CC lib/json/json_parse.o 00:03:20.897 CC lib/json/json_write.o 00:03:20.897 CC lib/json/json_util.o 00:03:20.897 CC lib/vmd/vmd.o 00:03:20.897 CC lib/rdma_utils/rdma_utils.o 00:03:20.897 CC lib/env_dpdk/env.o 00:03:20.897 CC lib/vmd/led.o 00:03:20.897 CC lib/env_dpdk/memory.o 00:03:20.897 CC lib/idxd/idxd.o 00:03:20.897 CC lib/conf/conf.o 00:03:21.156 CC lib/idxd/idxd_user.o 00:03:21.156 CC lib/idxd/idxd_kernel.o 00:03:21.156 CC lib/env_dpdk/pci.o 00:03:21.156 LIB libspdk_conf.a 00:03:21.156 SO libspdk_conf.so.6.0 00:03:21.156 LIB libspdk_rdma_utils.a 00:03:21.156 LIB libspdk_json.a 00:03:21.156 SO libspdk_rdma_utils.so.1.0 00:03:21.156 SO libspdk_json.so.6.0 00:03:21.415 SYMLINK libspdk_conf.so 00:03:21.415 CC lib/env_dpdk/init.o 00:03:21.415 SYMLINK libspdk_rdma_utils.so 00:03:21.415 CC lib/env_dpdk/threads.o 00:03:21.415 CC lib/env_dpdk/pci_ioat.o 00:03:21.415 SYMLINK libspdk_json.so 00:03:21.415 CC lib/env_dpdk/pci_virtio.o 00:03:21.415 CC lib/env_dpdk/pci_vmd.o 00:03:21.415 CC lib/env_dpdk/pci_idxd.o 00:03:21.415 CC lib/env_dpdk/pci_event.o 00:03:21.675 CC lib/env_dpdk/sigbus_handler.o 00:03:21.675 CC lib/env_dpdk/pci_dpdk.o 00:03:21.675 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:21.675 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:21.675 CC lib/rdma_provider/common.o 00:03:21.675 CC lib/jsonrpc/jsonrpc_server.o 00:03:21.675 LIB libspdk_idxd.a 00:03:21.675 LIB libspdk_vmd.a 00:03:21.675 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:21.675 SO libspdk_vmd.so.6.0 00:03:21.675 SO libspdk_idxd.so.12.1 00:03:21.675 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:21.675 SYMLINK libspdk_vmd.so 00:03:21.675 CC lib/jsonrpc/jsonrpc_client.o 00:03:21.675 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:21.675 SYMLINK libspdk_idxd.so 00:03:21.934 LIB libspdk_rdma_provider.a 00:03:21.934 SO libspdk_rdma_provider.so.7.0 00:03:21.934 LIB libspdk_jsonrpc.a 00:03:21.934 SYMLINK libspdk_rdma_provider.so 00:03:21.934 SO libspdk_jsonrpc.so.6.0 00:03:22.194 SYMLINK libspdk_jsonrpc.so 00:03:22.455 CC lib/rpc/rpc.o 00:03:22.455 LIB libspdk_env_dpdk.a 00:03:22.715 SO libspdk_env_dpdk.so.15.1 00:03:22.715 LIB libspdk_rpc.a 00:03:22.715 SO libspdk_rpc.so.6.0 00:03:22.715 SYMLINK libspdk_env_dpdk.so 00:03:22.975 SYMLINK libspdk_rpc.so 00:03:23.234 CC lib/notify/notify.o 00:03:23.234 CC lib/notify/notify_rpc.o 00:03:23.234 CC lib/keyring/keyring.o 00:03:23.234 CC lib/trace/trace.o 00:03:23.234 CC lib/trace/trace_flags.o 00:03:23.234 CC lib/trace/trace_rpc.o 00:03:23.234 CC lib/keyring/keyring_rpc.o 00:03:23.494 LIB libspdk_notify.a 00:03:23.494 SO libspdk_notify.so.6.0 00:03:23.494 LIB libspdk_keyring.a 00:03:23.494 SYMLINK libspdk_notify.so 00:03:23.494 LIB libspdk_trace.a 00:03:23.494 SO libspdk_keyring.so.2.0 00:03:23.494 SO libspdk_trace.so.11.0 00:03:23.755 SYMLINK libspdk_keyring.so 00:03:23.755 SYMLINK libspdk_trace.so 00:03:24.015 CC lib/sock/sock.o 00:03:24.015 CC lib/sock/sock_rpc.o 00:03:24.015 CC lib/thread/thread.o 00:03:24.015 CC lib/thread/iobuf.o 00:03:24.585 LIB libspdk_sock.a 00:03:24.585 SO libspdk_sock.so.10.0 00:03:24.585 SYMLINK libspdk_sock.so 00:03:25.154 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:25.154 CC lib/nvme/nvme_ctrlr.o 00:03:25.154 CC lib/nvme/nvme_fabric.o 00:03:25.154 CC lib/nvme/nvme_ns_cmd.o 00:03:25.154 CC lib/nvme/nvme_ns.o 00:03:25.154 CC 
lib/nvme/nvme_pcie_common.o 00:03:25.154 CC lib/nvme/nvme_pcie.o 00:03:25.154 CC lib/nvme/nvme_qpair.o 00:03:25.154 CC lib/nvme/nvme.o 00:03:25.724 LIB libspdk_thread.a 00:03:25.724 CC lib/nvme/nvme_quirks.o 00:03:25.724 SO libspdk_thread.so.11.0 00:03:25.724 CC lib/nvme/nvme_transport.o 00:03:25.724 CC lib/nvme/nvme_discovery.o 00:03:25.724 SYMLINK libspdk_thread.so 00:03:25.724 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:25.724 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:25.724 CC lib/nvme/nvme_tcp.o 00:03:25.984 CC lib/nvme/nvme_opal.o 00:03:25.984 CC lib/accel/accel.o 00:03:26.243 CC lib/accel/accel_rpc.o 00:03:26.243 CC lib/accel/accel_sw.o 00:03:26.243 CC lib/nvme/nvme_io_msg.o 00:03:26.243 CC lib/nvme/nvme_poll_group.o 00:03:26.503 CC lib/nvme/nvme_zns.o 00:03:26.504 CC lib/blob/blobstore.o 00:03:26.504 CC lib/blob/request.o 00:03:26.504 CC lib/blob/zeroes.o 00:03:26.504 CC lib/init/json_config.o 00:03:26.769 CC lib/init/subsystem.o 00:03:26.769 CC lib/init/subsystem_rpc.o 00:03:26.769 CC lib/init/rpc.o 00:03:26.769 CC lib/virtio/virtio.o 00:03:26.769 CC lib/virtio/virtio_vhost_user.o 00:03:27.041 CC lib/blob/blob_bs_dev.o 00:03:27.041 CC lib/fsdev/fsdev.o 00:03:27.041 CC lib/fsdev/fsdev_io.o 00:03:27.041 LIB libspdk_init.a 00:03:27.041 SO libspdk_init.so.6.0 00:03:27.041 SYMLINK libspdk_init.so 00:03:27.041 CC lib/nvme/nvme_stubs.o 00:03:27.041 LIB libspdk_accel.a 00:03:27.041 CC lib/virtio/virtio_vfio_user.o 00:03:27.300 SO libspdk_accel.so.16.0 00:03:27.300 CC lib/virtio/virtio_pci.o 00:03:27.300 CC lib/nvme/nvme_auth.o 00:03:27.300 SYMLINK libspdk_accel.so 00:03:27.300 CC lib/nvme/nvme_cuse.o 00:03:27.300 CC lib/nvme/nvme_rdma.o 00:03:27.300 CC lib/event/app.o 00:03:27.559 CC lib/event/reactor.o 00:03:27.559 LIB libspdk_virtio.a 00:03:27.559 CC lib/fsdev/fsdev_rpc.o 00:03:27.559 CC lib/bdev/bdev.o 00:03:27.559 SO libspdk_virtio.so.7.0 00:03:27.559 CC lib/bdev/bdev_rpc.o 00:03:27.560 SYMLINK libspdk_virtio.so 00:03:27.560 CC lib/bdev/bdev_zone.o 00:03:27.819 LIB libspdk_fsdev.a 00:03:27.819 SO libspdk_fsdev.so.2.0 00:03:27.819 SYMLINK libspdk_fsdev.so 00:03:27.819 CC lib/bdev/part.o 00:03:27.819 CC lib/bdev/scsi_nvme.o 00:03:27.819 CC lib/event/log_rpc.o 00:03:27.819 CC lib/event/app_rpc.o 00:03:28.078 CC lib/event/scheduler_static.o 00:03:28.078 LIB libspdk_event.a 00:03:28.078 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:28.337 SO libspdk_event.so.14.0 00:03:28.337 SYMLINK libspdk_event.so 00:03:28.596 LIB libspdk_nvme.a 00:03:28.856 LIB libspdk_fuse_dispatcher.a 00:03:28.856 SO libspdk_nvme.so.15.0 00:03:28.856 SO libspdk_fuse_dispatcher.so.1.0 00:03:29.114 SYMLINK libspdk_fuse_dispatcher.so 00:03:29.114 SYMLINK libspdk_nvme.so 00:03:30.051 LIB libspdk_blob.a 00:03:30.051 SO libspdk_blob.so.11.0 00:03:30.310 SYMLINK libspdk_blob.so 00:03:30.310 LIB libspdk_bdev.a 00:03:30.569 SO libspdk_bdev.so.17.0 00:03:30.569 SYMLINK libspdk_bdev.so 00:03:30.569 CC lib/blobfs/tree.o 00:03:30.569 CC lib/blobfs/blobfs.o 00:03:30.569 CC lib/lvol/lvol.o 00:03:30.827 CC lib/scsi/dev.o 00:03:30.827 CC lib/scsi/lun.o 00:03:30.827 CC lib/scsi/port.o 00:03:30.827 CC lib/scsi/scsi.o 00:03:30.827 CC lib/nvmf/ctrlr.o 00:03:30.827 CC lib/nbd/nbd.o 00:03:30.827 CC lib/ftl/ftl_core.o 00:03:30.827 CC lib/ublk/ublk.o 00:03:31.086 CC lib/ublk/ublk_rpc.o 00:03:31.086 CC lib/nbd/nbd_rpc.o 00:03:31.086 CC lib/nvmf/ctrlr_discovery.o 00:03:31.086 CC lib/nvmf/ctrlr_bdev.o 00:03:31.086 CC lib/scsi/scsi_bdev.o 00:03:31.086 CC lib/nvmf/subsystem.o 00:03:31.346 CC lib/ftl/ftl_init.o 00:03:31.346 LIB libspdk_nbd.a 
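A note on the recurring pattern in this part of the log: for each SPDK library, LIB marks the static archive, SO the versioned shared object, and SYMLINK the unversioned development link, so the LIB libspdk_nbd.a entry just above is followed by SO libspdk_nbd.so.7.0 and SYMLINK libspdk_nbd.so. The resulting layout can be checked in the output tree:

  # Versioned shared object plus an unversioned symlink pointing at it.
  ls -l build/lib/libspdk_nbd.so*
  # Illustrative shape (not captured from this run):
  #   libspdk_nbd.so -> libspdk_nbd.so.7.0
  #   libspdk_nbd.so.7.0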
00:03:31.346 SO libspdk_nbd.so.7.0 00:03:31.346 SYMLINK libspdk_nbd.so 00:03:31.346 CC lib/ftl/ftl_layout.o 00:03:31.346 CC lib/scsi/scsi_pr.o 00:03:31.604 LIB libspdk_blobfs.a 00:03:31.604 SO libspdk_blobfs.so.10.0 00:03:31.604 CC lib/ftl/ftl_debug.o 00:03:31.604 LIB libspdk_ublk.a 00:03:31.604 LIB libspdk_lvol.a 00:03:31.604 SO libspdk_ublk.so.3.0 00:03:31.604 SO libspdk_lvol.so.10.0 00:03:31.604 SYMLINK libspdk_blobfs.so 00:03:31.604 CC lib/scsi/scsi_rpc.o 00:03:31.604 CC lib/nvmf/nvmf.o 00:03:31.604 CC lib/nvmf/nvmf_rpc.o 00:03:31.604 SYMLINK libspdk_lvol.so 00:03:31.604 CC lib/nvmf/transport.o 00:03:31.604 SYMLINK libspdk_ublk.so 00:03:31.604 CC lib/nvmf/tcp.o 00:03:31.862 CC lib/nvmf/stubs.o 00:03:31.862 CC lib/scsi/task.o 00:03:31.862 CC lib/ftl/ftl_io.o 00:03:31.862 CC lib/nvmf/mdns_server.o 00:03:32.119 LIB libspdk_scsi.a 00:03:32.119 CC lib/ftl/ftl_sb.o 00:03:32.119 SO libspdk_scsi.so.9.0 00:03:32.119 CC lib/ftl/ftl_l2p.o 00:03:32.377 SYMLINK libspdk_scsi.so 00:03:32.377 CC lib/ftl/ftl_l2p_flat.o 00:03:32.377 CC lib/ftl/ftl_nv_cache.o 00:03:32.377 CC lib/iscsi/conn.o 00:03:32.377 CC lib/ftl/ftl_band.o 00:03:32.377 CC lib/ftl/ftl_band_ops.o 00:03:32.377 CC lib/ftl/ftl_writer.o 00:03:32.377 CC lib/vhost/vhost.o 00:03:32.636 CC lib/vhost/vhost_rpc.o 00:03:32.636 CC lib/vhost/vhost_scsi.o 00:03:32.636 CC lib/vhost/vhost_blk.o 00:03:32.636 CC lib/nvmf/rdma.o 00:03:32.895 CC lib/nvmf/auth.o 00:03:32.895 CC lib/vhost/rte_vhost_user.o 00:03:33.155 CC lib/iscsi/init_grp.o 00:03:33.155 CC lib/iscsi/iscsi.o 00:03:33.414 CC lib/iscsi/param.o 00:03:33.414 CC lib/iscsi/portal_grp.o 00:03:33.414 CC lib/ftl/ftl_rq.o 00:03:33.414 CC lib/iscsi/tgt_node.o 00:03:33.414 CC lib/iscsi/iscsi_subsystem.o 00:03:33.672 CC lib/iscsi/iscsi_rpc.o 00:03:33.672 CC lib/ftl/ftl_reloc.o 00:03:33.672 CC lib/iscsi/task.o 00:03:33.672 CC lib/ftl/ftl_l2p_cache.o 00:03:33.672 CC lib/ftl/ftl_p2l.o 00:03:33.931 CC lib/ftl/ftl_p2l_log.o 00:03:33.931 CC lib/ftl/mngt/ftl_mngt.o 00:03:33.931 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:33.931 LIB libspdk_vhost.a 00:03:33.931 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:34.189 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:34.189 SO libspdk_vhost.so.8.0 00:03:34.189 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:34.189 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:34.189 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:34.189 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:34.189 SYMLINK libspdk_vhost.so 00:03:34.189 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:34.189 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:34.189 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:34.189 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:34.448 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:34.448 CC lib/ftl/utils/ftl_conf.o 00:03:34.448 CC lib/ftl/utils/ftl_md.o 00:03:34.448 CC lib/ftl/utils/ftl_mempool.o 00:03:34.448 CC lib/ftl/utils/ftl_bitmap.o 00:03:34.448 CC lib/ftl/utils/ftl_property.o 00:03:34.448 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:34.707 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:34.707 LIB libspdk_iscsi.a 00:03:34.707 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:34.707 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:34.707 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:34.707 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:34.707 SO libspdk_iscsi.so.8.0 00:03:34.707 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:34.707 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:34.707 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:34.965 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:34.965 SYMLINK libspdk_iscsi.so 00:03:34.965 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:34.965 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 
00:03:34.965 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:34.965 CC lib/ftl/base/ftl_base_dev.o 00:03:34.965 CC lib/ftl/base/ftl_base_bdev.o 00:03:34.965 CC lib/ftl/ftl_trace.o 00:03:34.965 LIB libspdk_nvmf.a 00:03:35.224 LIB libspdk_ftl.a 00:03:35.224 SO libspdk_nvmf.so.20.0 00:03:35.483 SO libspdk_ftl.so.9.0 00:03:35.483 SYMLINK libspdk_nvmf.so 00:03:35.742 SYMLINK libspdk_ftl.so 00:03:36.309 CC module/env_dpdk/env_dpdk_rpc.o 00:03:36.309 CC module/blob/bdev/blob_bdev.o 00:03:36.309 CC module/accel/error/accel_error.o 00:03:36.309 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:36.309 CC module/fsdev/aio/fsdev_aio.o 00:03:36.309 CC module/accel/dsa/accel_dsa.o 00:03:36.309 CC module/accel/ioat/accel_ioat.o 00:03:36.309 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:36.309 CC module/sock/posix/posix.o 00:03:36.309 CC module/keyring/file/keyring.o 00:03:36.309 LIB libspdk_env_dpdk_rpc.a 00:03:36.309 SO libspdk_env_dpdk_rpc.so.6.0 00:03:36.309 SYMLINK libspdk_env_dpdk_rpc.so 00:03:36.309 CC module/accel/ioat/accel_ioat_rpc.o 00:03:36.309 LIB libspdk_scheduler_dpdk_governor.a 00:03:36.309 CC module/keyring/file/keyring_rpc.o 00:03:36.568 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:36.568 CC module/accel/dsa/accel_dsa_rpc.o 00:03:36.568 LIB libspdk_scheduler_dynamic.a 00:03:36.568 CC module/accel/error/accel_error_rpc.o 00:03:36.568 SO libspdk_scheduler_dynamic.so.4.0 00:03:36.568 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:36.568 LIB libspdk_accel_ioat.a 00:03:36.568 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:36.568 LIB libspdk_blob_bdev.a 00:03:36.568 SO libspdk_accel_ioat.so.6.0 00:03:36.568 SYMLINK libspdk_scheduler_dynamic.so 00:03:36.568 LIB libspdk_keyring_file.a 00:03:36.568 SO libspdk_blob_bdev.so.11.0 00:03:36.568 SO libspdk_keyring_file.so.2.0 00:03:36.568 LIB libspdk_accel_dsa.a 00:03:36.568 SYMLINK libspdk_accel_ioat.so 00:03:36.568 LIB libspdk_accel_error.a 00:03:36.568 SO libspdk_accel_dsa.so.5.0 00:03:36.568 SYMLINK libspdk_blob_bdev.so 00:03:36.568 CC module/fsdev/aio/linux_aio_mgr.o 00:03:36.568 SO libspdk_accel_error.so.2.0 00:03:36.568 SYMLINK libspdk_keyring_file.so 00:03:36.568 CC module/accel/iaa/accel_iaa.o 00:03:36.568 CC module/accel/iaa/accel_iaa_rpc.o 00:03:36.568 SYMLINK libspdk_accel_dsa.so 00:03:36.827 CC module/scheduler/gscheduler/gscheduler.o 00:03:36.827 SYMLINK libspdk_accel_error.so 00:03:36.827 CC module/keyring/linux/keyring.o 00:03:36.827 CC module/keyring/linux/keyring_rpc.o 00:03:36.827 LIB libspdk_scheduler_gscheduler.a 00:03:36.827 LIB libspdk_accel_iaa.a 00:03:36.827 SO libspdk_scheduler_gscheduler.so.4.0 00:03:36.827 SO libspdk_accel_iaa.so.3.0 00:03:36.827 CC module/bdev/delay/vbdev_delay.o 00:03:36.827 CC module/bdev/error/vbdev_error.o 00:03:36.827 CC module/blobfs/bdev/blobfs_bdev.o 00:03:36.827 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:36.827 SYMLINK libspdk_scheduler_gscheduler.so 00:03:36.827 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:37.085 CC module/bdev/gpt/gpt.o 00:03:37.085 LIB libspdk_keyring_linux.a 00:03:37.085 SYMLINK libspdk_accel_iaa.so 00:03:37.085 CC module/bdev/error/vbdev_error_rpc.o 00:03:37.085 SO libspdk_keyring_linux.so.1.0 00:03:37.085 LIB libspdk_fsdev_aio.a 00:03:37.085 SYMLINK libspdk_keyring_linux.so 00:03:37.085 SO libspdk_fsdev_aio.so.1.0 00:03:37.085 CC module/bdev/gpt/vbdev_gpt.o 00:03:37.085 LIB libspdk_blobfs_bdev.a 00:03:37.085 LIB libspdk_sock_posix.a 00:03:37.085 SO libspdk_blobfs_bdev.so.6.0 00:03:37.085 SYMLINK libspdk_fsdev_aio.so 00:03:37.085 SO libspdk_sock_posix.so.6.0 
00:03:37.085 LIB libspdk_bdev_error.a 00:03:37.344 SYMLINK libspdk_blobfs_bdev.so 00:03:37.344 SO libspdk_bdev_error.so.6.0 00:03:37.344 SYMLINK libspdk_sock_posix.so 00:03:37.344 CC module/bdev/lvol/vbdev_lvol.o 00:03:37.344 LIB libspdk_bdev_delay.a 00:03:37.344 SYMLINK libspdk_bdev_error.so 00:03:37.344 SO libspdk_bdev_delay.so.6.0 00:03:37.344 CC module/bdev/malloc/bdev_malloc.o 00:03:37.344 CC module/bdev/null/bdev_null.o 00:03:37.344 CC module/bdev/nvme/bdev_nvme.o 00:03:37.344 LIB libspdk_bdev_gpt.a 00:03:37.344 CC module/bdev/passthru/vbdev_passthru.o 00:03:37.344 CC module/bdev/raid/bdev_raid.o 00:03:37.344 SYMLINK libspdk_bdev_delay.so 00:03:37.344 CC module/bdev/raid/bdev_raid_rpc.o 00:03:37.344 SO libspdk_bdev_gpt.so.6.0 00:03:37.344 CC module/bdev/split/vbdev_split.o 00:03:37.602 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:37.602 SYMLINK libspdk_bdev_gpt.so 00:03:37.602 CC module/bdev/raid/bdev_raid_sb.o 00:03:37.602 CC module/bdev/null/bdev_null_rpc.o 00:03:37.602 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:37.602 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:37.602 CC module/bdev/split/vbdev_split_rpc.o 00:03:37.602 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:37.861 LIB libspdk_bdev_null.a 00:03:37.861 CC module/bdev/nvme/nvme_rpc.o 00:03:37.861 SO libspdk_bdev_null.so.6.0 00:03:37.861 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:37.861 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:37.861 LIB libspdk_bdev_split.a 00:03:37.861 LIB libspdk_bdev_passthru.a 00:03:37.861 SYMLINK libspdk_bdev_null.so 00:03:37.861 LIB libspdk_bdev_malloc.a 00:03:37.861 SO libspdk_bdev_split.so.6.0 00:03:37.861 SO libspdk_bdev_passthru.so.6.0 00:03:37.861 SO libspdk_bdev_malloc.so.6.0 00:03:37.861 SYMLINK libspdk_bdev_passthru.so 00:03:37.861 SYMLINK libspdk_bdev_split.so 00:03:38.120 LIB libspdk_bdev_zone_block.a 00:03:38.120 CC module/bdev/raid/raid0.o 00:03:38.120 SYMLINK libspdk_bdev_malloc.so 00:03:38.120 CC module/bdev/raid/raid1.o 00:03:38.120 SO libspdk_bdev_zone_block.so.6.0 00:03:38.120 CC module/bdev/raid/concat.o 00:03:38.120 CC module/bdev/xnvme/bdev_xnvme.o 00:03:38.120 SYMLINK libspdk_bdev_zone_block.so 00:03:38.120 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:38.120 CC module/bdev/aio/bdev_aio.o 00:03:38.380 LIB libspdk_bdev_lvol.a 00:03:38.380 CC module/bdev/aio/bdev_aio_rpc.o 00:03:38.380 SO libspdk_bdev_lvol.so.6.0 00:03:38.380 CC module/bdev/nvme/bdev_mdns_client.o 00:03:38.380 CC module/bdev/nvme/vbdev_opal.o 00:03:38.380 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:38.380 SYMLINK libspdk_bdev_lvol.so 00:03:38.380 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:38.380 LIB libspdk_bdev_xnvme.a 00:03:38.380 SO libspdk_bdev_xnvme.so.3.0 00:03:38.380 LIB libspdk_bdev_raid.a 00:03:38.380 CC module/bdev/ftl/bdev_ftl.o 00:03:38.638 SO libspdk_bdev_raid.so.6.0 00:03:38.638 SYMLINK libspdk_bdev_xnvme.so 00:03:38.638 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:38.638 LIB libspdk_bdev_aio.a 00:03:38.638 CC module/bdev/iscsi/bdev_iscsi.o 00:03:38.638 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:38.638 SYMLINK libspdk_bdev_raid.so 00:03:38.638 SO libspdk_bdev_aio.so.6.0 00:03:38.638 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:38.638 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:38.638 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:38.897 SYMLINK libspdk_bdev_aio.so 00:03:38.897 LIB libspdk_bdev_ftl.a 00:03:38.897 SO libspdk_bdev_ftl.so.6.0 00:03:38.897 SYMLINK libspdk_bdev_ftl.so 00:03:39.156 LIB libspdk_bdev_iscsi.a 00:03:39.156 SO libspdk_bdev_iscsi.so.6.0 00:03:39.416 SYMLINK 
libspdk_bdev_iscsi.so 00:03:39.416 LIB libspdk_bdev_virtio.a 00:03:39.416 SO libspdk_bdev_virtio.so.6.0 00:03:39.675 SYMLINK libspdk_bdev_virtio.so 00:03:40.611 LIB libspdk_bdev_nvme.a 00:03:40.611 SO libspdk_bdev_nvme.so.7.1 00:03:40.611 SYMLINK libspdk_bdev_nvme.so 00:03:41.180 CC module/event/subsystems/iobuf/iobuf.o 00:03:41.180 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:41.180 CC module/event/subsystems/scheduler/scheduler.o 00:03:41.180 CC module/event/subsystems/sock/sock.o 00:03:41.180 CC module/event/subsystems/vmd/vmd.o 00:03:41.180 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:41.180 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:41.180 CC module/event/subsystems/keyring/keyring.o 00:03:41.180 CC module/event/subsystems/fsdev/fsdev.o 00:03:41.438 LIB libspdk_event_vmd.a 00:03:41.439 LIB libspdk_event_sock.a 00:03:41.439 LIB libspdk_event_keyring.a 00:03:41.439 LIB libspdk_event_iobuf.a 00:03:41.439 LIB libspdk_event_vhost_blk.a 00:03:41.439 LIB libspdk_event_scheduler.a 00:03:41.439 LIB libspdk_event_fsdev.a 00:03:41.439 SO libspdk_event_sock.so.5.0 00:03:41.439 SO libspdk_event_keyring.so.1.0 00:03:41.439 SO libspdk_event_vmd.so.6.0 00:03:41.439 SO libspdk_event_vhost_blk.so.3.0 00:03:41.439 SO libspdk_event_scheduler.so.4.0 00:03:41.439 SO libspdk_event_iobuf.so.3.0 00:03:41.439 SO libspdk_event_fsdev.so.1.0 00:03:41.439 SYMLINK libspdk_event_sock.so 00:03:41.439 SYMLINK libspdk_event_keyring.so 00:03:41.439 SYMLINK libspdk_event_vmd.so 00:03:41.439 SYMLINK libspdk_event_vhost_blk.so 00:03:41.439 SYMLINK libspdk_event_scheduler.so 00:03:41.439 SYMLINK libspdk_event_fsdev.so 00:03:41.439 SYMLINK libspdk_event_iobuf.so 00:03:42.006 CC module/event/subsystems/accel/accel.o 00:03:42.006 LIB libspdk_event_accel.a 00:03:42.006 SO libspdk_event_accel.so.6.0 00:03:42.006 SYMLINK libspdk_event_accel.so 00:03:42.574 CC module/event/subsystems/bdev/bdev.o 00:03:42.574 LIB libspdk_event_bdev.a 00:03:42.833 SO libspdk_event_bdev.so.6.0 00:03:42.833 SYMLINK libspdk_event_bdev.so 00:03:43.093 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:43.093 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:43.093 CC module/event/subsystems/ublk/ublk.o 00:03:43.093 CC module/event/subsystems/nbd/nbd.o 00:03:43.093 CC module/event/subsystems/scsi/scsi.o 00:03:43.353 LIB libspdk_event_nbd.a 00:03:43.353 LIB libspdk_event_ublk.a 00:03:43.353 LIB libspdk_event_scsi.a 00:03:43.353 SO libspdk_event_nbd.so.6.0 00:03:43.353 SO libspdk_event_ublk.so.3.0 00:03:43.353 SO libspdk_event_scsi.so.6.0 00:03:43.353 LIB libspdk_event_nvmf.a 00:03:43.353 SYMLINK libspdk_event_ublk.so 00:03:43.353 SYMLINK libspdk_event_nbd.so 00:03:43.353 SYMLINK libspdk_event_scsi.so 00:03:43.353 SO libspdk_event_nvmf.so.6.0 00:03:43.612 SYMLINK libspdk_event_nvmf.so 00:03:43.871 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:43.871 CC module/event/subsystems/iscsi/iscsi.o 00:03:43.871 LIB libspdk_event_vhost_scsi.a 00:03:43.871 LIB libspdk_event_iscsi.a 00:03:43.871 SO libspdk_event_vhost_scsi.so.3.0 00:03:43.871 SO libspdk_event_iscsi.so.6.0 00:03:44.129 SYMLINK libspdk_event_vhost_scsi.so 00:03:44.129 SYMLINK libspdk_event_iscsi.so 00:03:44.387 SO libspdk.so.6.0 00:03:44.387 SYMLINK libspdk.so 00:03:44.646 CXX app/trace/trace.o 00:03:44.646 CC app/spdk_lspci/spdk_lspci.o 00:03:44.646 CC app/trace_record/trace_record.o 00:03:44.647 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:44.647 CC app/iscsi_tgt/iscsi_tgt.o 00:03:44.647 CC app/nvmf_tgt/nvmf_main.o 00:03:44.647 CC app/spdk_tgt/spdk_tgt.o 00:03:44.647 CC 
examples/ioat/perf/perf.o 00:03:44.647 CC examples/util/zipf/zipf.o 00:03:44.647 CC test/thread/poller_perf/poller_perf.o 00:03:44.647 LINK spdk_lspci 00:03:44.647 LINK interrupt_tgt 00:03:44.647 LINK poller_perf 00:03:44.906 LINK nvmf_tgt 00:03:44.906 LINK iscsi_tgt 00:03:44.906 LINK spdk_trace_record 00:03:44.906 LINK spdk_tgt 00:03:44.906 LINK zipf 00:03:44.906 LINK ioat_perf 00:03:44.906 LINK spdk_trace 00:03:44.906 TEST_HEADER include/spdk/accel.h 00:03:44.906 TEST_HEADER include/spdk/accel_module.h 00:03:44.906 TEST_HEADER include/spdk/assert.h 00:03:44.906 TEST_HEADER include/spdk/barrier.h 00:03:44.906 TEST_HEADER include/spdk/base64.h 00:03:44.906 TEST_HEADER include/spdk/bdev.h 00:03:44.906 TEST_HEADER include/spdk/bdev_module.h 00:03:44.906 TEST_HEADER include/spdk/bdev_zone.h 00:03:44.906 TEST_HEADER include/spdk/bit_array.h 00:03:44.906 TEST_HEADER include/spdk/bit_pool.h 00:03:45.165 TEST_HEADER include/spdk/blob_bdev.h 00:03:45.165 CC test/dma/test_dma/test_dma.o 00:03:45.165 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:45.165 TEST_HEADER include/spdk/blobfs.h 00:03:45.165 TEST_HEADER include/spdk/blob.h 00:03:45.165 TEST_HEADER include/spdk/conf.h 00:03:45.165 TEST_HEADER include/spdk/config.h 00:03:45.165 TEST_HEADER include/spdk/cpuset.h 00:03:45.165 TEST_HEADER include/spdk/crc16.h 00:03:45.165 TEST_HEADER include/spdk/crc32.h 00:03:45.165 TEST_HEADER include/spdk/crc64.h 00:03:45.165 TEST_HEADER include/spdk/dif.h 00:03:45.165 TEST_HEADER include/spdk/dma.h 00:03:45.165 TEST_HEADER include/spdk/endian.h 00:03:45.165 TEST_HEADER include/spdk/env_dpdk.h 00:03:45.165 TEST_HEADER include/spdk/env.h 00:03:45.165 TEST_HEADER include/spdk/event.h 00:03:45.165 CC examples/ioat/verify/verify.o 00:03:45.165 CC app/spdk_nvme_perf/perf.o 00:03:45.165 TEST_HEADER include/spdk/fd_group.h 00:03:45.165 TEST_HEADER include/spdk/fd.h 00:03:45.165 TEST_HEADER include/spdk/file.h 00:03:45.165 TEST_HEADER include/spdk/fsdev.h 00:03:45.165 TEST_HEADER include/spdk/fsdev_module.h 00:03:45.165 TEST_HEADER include/spdk/ftl.h 00:03:45.165 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:45.165 TEST_HEADER include/spdk/gpt_spec.h 00:03:45.165 TEST_HEADER include/spdk/hexlify.h 00:03:45.165 TEST_HEADER include/spdk/histogram_data.h 00:03:45.165 TEST_HEADER include/spdk/idxd.h 00:03:45.165 TEST_HEADER include/spdk/idxd_spec.h 00:03:45.165 TEST_HEADER include/spdk/init.h 00:03:45.165 TEST_HEADER include/spdk/ioat.h 00:03:45.165 TEST_HEADER include/spdk/ioat_spec.h 00:03:45.165 TEST_HEADER include/spdk/iscsi_spec.h 00:03:45.165 TEST_HEADER include/spdk/json.h 00:03:45.165 TEST_HEADER include/spdk/jsonrpc.h 00:03:45.165 TEST_HEADER include/spdk/keyring.h 00:03:45.165 TEST_HEADER include/spdk/keyring_module.h 00:03:45.165 TEST_HEADER include/spdk/likely.h 00:03:45.165 TEST_HEADER include/spdk/log.h 00:03:45.165 TEST_HEADER include/spdk/lvol.h 00:03:45.165 TEST_HEADER include/spdk/md5.h 00:03:45.165 CC test/app/bdev_svc/bdev_svc.o 00:03:45.165 TEST_HEADER include/spdk/memory.h 00:03:45.165 TEST_HEADER include/spdk/mmio.h 00:03:45.165 TEST_HEADER include/spdk/nbd.h 00:03:45.165 TEST_HEADER include/spdk/net.h 00:03:45.165 TEST_HEADER include/spdk/notify.h 00:03:45.165 TEST_HEADER include/spdk/nvme.h 00:03:45.165 TEST_HEADER include/spdk/nvme_intel.h 00:03:45.165 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:45.165 CC test/event/event_perf/event_perf.o 00:03:45.165 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:45.165 TEST_HEADER include/spdk/nvme_spec.h 00:03:45.165 TEST_HEADER include/spdk/nvme_zns.h 
00:03:45.165 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:45.165 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:45.165 TEST_HEADER include/spdk/nvmf.h 00:03:45.165 TEST_HEADER include/spdk/nvmf_spec.h 00:03:45.165 TEST_HEADER include/spdk/nvmf_transport.h 00:03:45.165 TEST_HEADER include/spdk/opal.h 00:03:45.165 TEST_HEADER include/spdk/opal_spec.h 00:03:45.165 TEST_HEADER include/spdk/pci_ids.h 00:03:45.165 TEST_HEADER include/spdk/pipe.h 00:03:45.165 TEST_HEADER include/spdk/queue.h 00:03:45.165 TEST_HEADER include/spdk/reduce.h 00:03:45.165 TEST_HEADER include/spdk/rpc.h 00:03:45.165 TEST_HEADER include/spdk/scheduler.h 00:03:45.165 TEST_HEADER include/spdk/scsi.h 00:03:45.165 TEST_HEADER include/spdk/scsi_spec.h 00:03:45.165 TEST_HEADER include/spdk/sock.h 00:03:45.165 TEST_HEADER include/spdk/stdinc.h 00:03:45.165 TEST_HEADER include/spdk/string.h 00:03:45.165 TEST_HEADER include/spdk/thread.h 00:03:45.165 TEST_HEADER include/spdk/trace.h 00:03:45.165 TEST_HEADER include/spdk/trace_parser.h 00:03:45.165 CC test/env/mem_callbacks/mem_callbacks.o 00:03:45.165 TEST_HEADER include/spdk/tree.h 00:03:45.165 TEST_HEADER include/spdk/ublk.h 00:03:45.165 CC examples/sock/hello_world/hello_sock.o 00:03:45.165 TEST_HEADER include/spdk/util.h 00:03:45.165 TEST_HEADER include/spdk/uuid.h 00:03:45.165 TEST_HEADER include/spdk/version.h 00:03:45.165 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:45.165 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:45.165 TEST_HEADER include/spdk/vhost.h 00:03:45.165 TEST_HEADER include/spdk/vmd.h 00:03:45.165 TEST_HEADER include/spdk/xor.h 00:03:45.165 TEST_HEADER include/spdk/zipf.h 00:03:45.165 CC examples/thread/thread/thread_ex.o 00:03:45.165 CXX test/cpp_headers/accel.o 00:03:45.165 CC examples/vmd/lsvmd/lsvmd.o 00:03:45.165 LINK event_perf 00:03:45.165 LINK bdev_svc 00:03:45.425 LINK verify 00:03:45.425 LINK lsvmd 00:03:45.425 CXX test/cpp_headers/accel_module.o 00:03:45.425 LINK hello_sock 00:03:45.425 LINK thread 00:03:45.425 LINK test_dma 00:03:45.684 CXX test/cpp_headers/assert.o 00:03:45.684 CC test/event/reactor/reactor.o 00:03:45.684 CC test/app/histogram_perf/histogram_perf.o 00:03:45.684 CC examples/vmd/led/led.o 00:03:45.684 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:45.684 LINK mem_callbacks 00:03:45.684 CC test/app/jsoncat/jsoncat.o 00:03:45.685 CXX test/cpp_headers/barrier.o 00:03:45.685 LINK reactor 00:03:45.685 CC test/app/stub/stub.o 00:03:45.685 LINK histogram_perf 00:03:45.685 LINK led 00:03:45.944 LINK jsoncat 00:03:45.944 CXX test/cpp_headers/base64.o 00:03:45.944 LINK stub 00:03:45.944 CC examples/idxd/perf/perf.o 00:03:45.944 CC test/env/vtophys/vtophys.o 00:03:45.944 LINK spdk_nvme_perf 00:03:45.944 CXX test/cpp_headers/bdev.o 00:03:45.944 CC test/event/reactor_perf/reactor_perf.o 00:03:45.944 CXX test/cpp_headers/bdev_module.o 00:03:46.203 CXX test/cpp_headers/bdev_zone.o 00:03:46.203 LINK nvme_fuzz 00:03:46.203 LINK reactor_perf 00:03:46.203 LINK vtophys 00:03:46.203 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:46.203 CC app/spdk_nvme_identify/identify.o 00:03:46.203 CC examples/accel/perf/accel_perf.o 00:03:46.203 CXX test/cpp_headers/bit_array.o 00:03:46.463 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:46.463 CC examples/nvme/hello_world/hello_world.o 00:03:46.463 CC examples/blob/hello_world/hello_blob.o 00:03:46.463 CC test/event/app_repeat/app_repeat.o 00:03:46.463 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:46.463 LINK idxd_perf 00:03:46.463 LINK hello_fsdev 00:03:46.463 CXX test/cpp_headers/bit_pool.o 
00:03:46.463 LINK env_dpdk_post_init 00:03:46.463 LINK app_repeat 00:03:46.722 LINK hello_blob 00:03:46.723 LINK hello_world 00:03:46.723 CC examples/nvme/reconnect/reconnect.o 00:03:46.723 CXX test/cpp_headers/blob_bdev.o 00:03:46.723 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:46.723 CC test/env/memory/memory_ut.o 00:03:46.723 LINK accel_perf 00:03:46.723 CXX test/cpp_headers/blobfs_bdev.o 00:03:46.723 CC test/event/scheduler/scheduler.o 00:03:46.981 CC test/rpc_client/rpc_client_test.o 00:03:46.981 CC examples/blob/cli/blobcli.o 00:03:46.981 CXX test/cpp_headers/blobfs.o 00:03:46.981 LINK reconnect 00:03:46.981 LINK scheduler 00:03:46.981 LINK rpc_client_test 00:03:46.981 LINK spdk_nvme_identify 00:03:47.240 CXX test/cpp_headers/blob.o 00:03:47.240 CC test/accel/dif/dif.o 00:03:47.240 CXX test/cpp_headers/conf.o 00:03:47.240 CC examples/nvme/arbitration/arbitration.o 00:03:47.240 LINK nvme_manage 00:03:47.240 CC examples/nvme/hotplug/hotplug.o 00:03:47.240 CC app/spdk_nvme_discover/discovery_aer.o 00:03:47.500 CXX test/cpp_headers/config.o 00:03:47.500 CC test/env/pci/pci_ut.o 00:03:47.500 CXX test/cpp_headers/cpuset.o 00:03:47.500 LINK blobcli 00:03:47.500 CXX test/cpp_headers/crc16.o 00:03:47.500 LINK hotplug 00:03:47.500 LINK spdk_nvme_discover 00:03:47.500 LINK arbitration 00:03:47.500 CXX test/cpp_headers/crc32.o 00:03:47.500 CXX test/cpp_headers/crc64.o 00:03:47.759 CXX test/cpp_headers/dif.o 00:03:47.759 CC test/blobfs/mkfs/mkfs.o 00:03:47.759 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:47.759 CC app/spdk_top/spdk_top.o 00:03:47.759 LINK pci_ut 00:03:47.759 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:47.759 LINK memory_ut 00:03:48.018 CXX test/cpp_headers/dma.o 00:03:48.018 LINK dif 00:03:48.018 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:48.018 LINK cmb_copy 00:03:48.018 LINK mkfs 00:03:48.018 CC test/lvol/esnap/esnap.o 00:03:48.018 CXX test/cpp_headers/endian.o 00:03:48.277 CC app/vhost/vhost.o 00:03:48.277 CC app/spdk_dd/spdk_dd.o 00:03:48.277 CC examples/nvme/abort/abort.o 00:03:48.277 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:48.277 LINK iscsi_fuzz 00:03:48.277 CXX test/cpp_headers/env_dpdk.o 00:03:48.277 CC app/fio/nvme/fio_plugin.o 00:03:48.277 LINK vhost 00:03:48.535 CXX test/cpp_headers/env.o 00:03:48.535 LINK pmr_persistence 00:03:48.536 LINK vhost_fuzz 00:03:48.536 CXX test/cpp_headers/event.o 00:03:48.536 LINK spdk_dd 00:03:48.536 LINK abort 00:03:48.536 CXX test/cpp_headers/fd_group.o 00:03:48.795 CC app/fio/bdev/fio_plugin.o 00:03:48.795 LINK spdk_top 00:03:48.795 CC test/nvme/aer/aer.o 00:03:48.795 CXX test/cpp_headers/fd.o 00:03:48.795 CC test/bdev/bdevio/bdevio.o 00:03:48.795 CC examples/bdev/hello_world/hello_bdev.o 00:03:48.795 CC examples/bdev/bdevperf/bdevperf.o 00:03:48.795 LINK spdk_nvme 00:03:49.055 CC test/nvme/reset/reset.o 00:03:49.055 CC test/nvme/sgl/sgl.o 00:03:49.055 CXX test/cpp_headers/file.o 00:03:49.055 CC test/nvme/e2edp/nvme_dp.o 00:03:49.055 LINK aer 00:03:49.055 LINK hello_bdev 00:03:49.055 CXX test/cpp_headers/fsdev.o 00:03:49.315 LINK reset 00:03:49.315 LINK spdk_bdev 00:03:49.315 LINK sgl 00:03:49.315 CXX test/cpp_headers/fsdev_module.o 00:03:49.315 LINK bdevio 00:03:49.315 CXX test/cpp_headers/ftl.o 00:03:49.315 CC test/nvme/overhead/overhead.o 00:03:49.315 LINK nvme_dp 00:03:49.315 CC test/nvme/err_injection/err_injection.o 00:03:49.574 CC test/nvme/startup/startup.o 00:03:49.574 CXX test/cpp_headers/fuse_dispatcher.o 00:03:49.574 CC test/nvme/reserve/reserve.o 00:03:49.574 CXX test/cpp_headers/gpt_spec.o 
00:03:49.574 CC test/nvme/simple_copy/simple_copy.o 00:03:49.574 LINK err_injection 00:03:49.574 CC test/nvme/connect_stress/connect_stress.o 00:03:49.574 LINK overhead 00:03:49.837 LINK startup 00:03:49.837 LINK bdevperf 00:03:49.837 CXX test/cpp_headers/hexlify.o 00:03:49.837 LINK reserve 00:03:49.837 CC test/nvme/boot_partition/boot_partition.o 00:03:49.837 LINK simple_copy 00:03:49.837 CXX test/cpp_headers/histogram_data.o 00:03:49.837 LINK connect_stress 00:03:49.837 CC test/nvme/compliance/nvme_compliance.o 00:03:49.837 CXX test/cpp_headers/idxd.o 00:03:49.837 CXX test/cpp_headers/idxd_spec.o 00:03:49.837 CXX test/cpp_headers/init.o 00:03:50.102 LINK boot_partition 00:03:50.102 CXX test/cpp_headers/ioat.o 00:03:50.102 CXX test/cpp_headers/ioat_spec.o 00:03:50.102 CC test/nvme/fused_ordering/fused_ordering.o 00:03:50.102 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:50.102 CXX test/cpp_headers/iscsi_spec.o 00:03:50.102 CC test/nvme/fdp/fdp.o 00:03:50.362 CC examples/nvmf/nvmf/nvmf.o 00:03:50.362 CXX test/cpp_headers/json.o 00:03:50.362 CXX test/cpp_headers/jsonrpc.o 00:03:50.362 CXX test/cpp_headers/keyring.o 00:03:50.362 LINK nvme_compliance 00:03:50.362 CC test/nvme/cuse/cuse.o 00:03:50.362 LINK fused_ordering 00:03:50.363 LINK doorbell_aers 00:03:50.363 CXX test/cpp_headers/keyring_module.o 00:03:50.363 CXX test/cpp_headers/likely.o 00:03:50.363 CXX test/cpp_headers/log.o 00:03:50.363 CXX test/cpp_headers/lvol.o 00:03:50.363 CXX test/cpp_headers/md5.o 00:03:50.622 CXX test/cpp_headers/memory.o 00:03:50.622 LINK nvmf 00:03:50.622 LINK fdp 00:03:50.622 CXX test/cpp_headers/mmio.o 00:03:50.622 CXX test/cpp_headers/nbd.o 00:03:50.622 CXX test/cpp_headers/net.o 00:03:50.622 CXX test/cpp_headers/notify.o 00:03:50.622 CXX test/cpp_headers/nvme.o 00:03:50.622 CXX test/cpp_headers/nvme_intel.o 00:03:50.622 CXX test/cpp_headers/nvme_ocssd.o 00:03:50.622 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:50.622 CXX test/cpp_headers/nvme_spec.o 00:03:50.622 CXX test/cpp_headers/nvme_zns.o 00:03:50.881 CXX test/cpp_headers/nvmf_cmd.o 00:03:50.881 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:50.881 CXX test/cpp_headers/nvmf.o 00:03:50.881 CXX test/cpp_headers/nvmf_spec.o 00:03:50.881 CXX test/cpp_headers/nvmf_transport.o 00:03:50.881 CXX test/cpp_headers/opal.o 00:03:50.881 CXX test/cpp_headers/opal_spec.o 00:03:50.881 CXX test/cpp_headers/pci_ids.o 00:03:50.881 CXX test/cpp_headers/pipe.o 00:03:50.881 CXX test/cpp_headers/queue.o 00:03:51.140 CXX test/cpp_headers/reduce.o 00:03:51.140 CXX test/cpp_headers/rpc.o 00:03:51.140 CXX test/cpp_headers/scheduler.o 00:03:51.140 CXX test/cpp_headers/scsi.o 00:03:51.140 CXX test/cpp_headers/scsi_spec.o 00:03:51.140 CXX test/cpp_headers/sock.o 00:03:51.140 CXX test/cpp_headers/stdinc.o 00:03:51.140 CXX test/cpp_headers/string.o 00:03:51.140 CXX test/cpp_headers/thread.o 00:03:51.140 CXX test/cpp_headers/trace.o 00:03:51.140 CXX test/cpp_headers/trace_parser.o 00:03:51.400 CXX test/cpp_headers/tree.o 00:03:51.400 CXX test/cpp_headers/ublk.o 00:03:51.400 CXX test/cpp_headers/util.o 00:03:51.400 CXX test/cpp_headers/uuid.o 00:03:51.400 CXX test/cpp_headers/version.o 00:03:51.400 CXX test/cpp_headers/vfio_user_pci.o 00:03:51.400 CXX test/cpp_headers/vfio_user_spec.o 00:03:51.400 CXX test/cpp_headers/vhost.o 00:03:51.400 CXX test/cpp_headers/vmd.o 00:03:51.400 CXX test/cpp_headers/xor.o 00:03:51.400 CXX test/cpp_headers/zipf.o 00:03:51.659 LINK cuse 00:03:54.194 LINK esnap 00:03:54.194 00:03:54.194 real 1m24.403s 00:03:54.194 user 7m13.530s 00:03:54.194 sys 1m52.855s 
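[editor's note] The CXX test/cpp_headers/<header>.o lines scattered through the build output above come from SPDK's header self-containedness check: every public header listed in the TEST_HEADER entries is compiled into its own object with nothing else included, so a header that forgets a transitive include fails right here rather than in a consumer's build. A minimal sketch of one such translation unit follows; the exact generated file contents are an assumption, not copied from the repo.

    /*
     * Hypothetical stand-in for one generated test/cpp_headers unit.
     * Including exactly one public header and nothing else means any
     * missing transitive include breaks this object's compile; the
     * real check builds these with a C++ compiler (hence the CXX
     * lines above) to catch C++ incompatibilities as well.
     */
    #include "spdk/gpt_spec.h"
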
00:03:54.194 14:56:54 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:54.194 14:56:54 make -- common/autotest_common.sh@10 -- $ set +x 00:03:54.194 ************************************ 00:03:54.194 END TEST make 00:03:54.194 ************************************ 00:03:54.194 14:56:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:54.194 14:56:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:54.194 14:56:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:54.194 14:56:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.194 14:56:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:54.194 14:56:54 -- pm/common@44 -- $ pid=5280 00:03:54.194 14:56:54 -- pm/common@50 -- $ kill -TERM 5280 00:03:54.194 14:56:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.194 14:56:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:54.194 14:56:54 -- pm/common@44 -- $ pid=5282 00:03:54.194 14:56:54 -- pm/common@50 -- $ kill -TERM 5282 00:03:54.194 14:56:54 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:54.194 14:56:54 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:54.454 14:56:55 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:54.454 14:56:55 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:54.454 14:56:55 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:54.454 14:56:55 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:54.454 14:56:55 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:54.454 14:56:55 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:54.454 14:56:55 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:54.454 14:56:55 -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.454 14:56:55 -- scripts/common.sh@336 -- # read -ra ver1 00:03:54.454 14:56:55 -- scripts/common.sh@337 -- # IFS=.-: 00:03:54.454 14:56:55 -- scripts/common.sh@337 -- # read -ra ver2 00:03:54.454 14:56:55 -- scripts/common.sh@338 -- # local 'op=<' 00:03:54.454 14:56:55 -- scripts/common.sh@340 -- # ver1_l=2 00:03:54.454 14:56:55 -- scripts/common.sh@341 -- # ver2_l=1 00:03:54.454 14:56:55 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:54.454 14:56:55 -- scripts/common.sh@344 -- # case "$op" in 00:03:54.454 14:56:55 -- scripts/common.sh@345 -- # : 1 00:03:54.454 14:56:55 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:54.454 14:56:55 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:54.454 14:56:55 -- scripts/common.sh@365 -- # decimal 1 00:03:54.454 14:56:55 -- scripts/common.sh@353 -- # local d=1 00:03:54.454 14:56:55 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.454 14:56:55 -- scripts/common.sh@355 -- # echo 1 00:03:54.454 14:56:55 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:54.454 14:56:55 -- scripts/common.sh@366 -- # decimal 2 00:03:54.454 14:56:55 -- scripts/common.sh@353 -- # local d=2 00:03:54.454 14:56:55 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.454 14:56:55 -- scripts/common.sh@355 -- # echo 2 00:03:54.454 14:56:55 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:54.454 14:56:55 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:54.454 14:56:55 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:54.454 14:56:55 -- scripts/common.sh@368 -- # return 0 00:03:54.454 14:56:55 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.454 14:56:55 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:54.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.454 --rc genhtml_branch_coverage=1 00:03:54.454 --rc genhtml_function_coverage=1 00:03:54.454 --rc genhtml_legend=1 00:03:54.454 --rc geninfo_all_blocks=1 00:03:54.454 --rc geninfo_unexecuted_blocks=1 00:03:54.454 00:03:54.454 ' 00:03:54.454 14:56:55 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:54.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.454 --rc genhtml_branch_coverage=1 00:03:54.454 --rc genhtml_function_coverage=1 00:03:54.454 --rc genhtml_legend=1 00:03:54.454 --rc geninfo_all_blocks=1 00:03:54.454 --rc geninfo_unexecuted_blocks=1 00:03:54.454 00:03:54.454 ' 00:03:54.454 14:56:55 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:54.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.454 --rc genhtml_branch_coverage=1 00:03:54.454 --rc genhtml_function_coverage=1 00:03:54.454 --rc genhtml_legend=1 00:03:54.454 --rc geninfo_all_blocks=1 00:03:54.454 --rc geninfo_unexecuted_blocks=1 00:03:54.454 00:03:54.454 ' 00:03:54.454 14:56:55 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:54.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.454 --rc genhtml_branch_coverage=1 00:03:54.454 --rc genhtml_function_coverage=1 00:03:54.454 --rc genhtml_legend=1 00:03:54.454 --rc geninfo_all_blocks=1 00:03:54.454 --rc geninfo_unexecuted_blocks=1 00:03:54.454 00:03:54.454 ' 00:03:54.454 14:56:55 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:54.454 14:56:55 -- nvmf/common.sh@7 -- # uname -s 00:03:54.454 14:56:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:54.454 14:56:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:54.454 14:56:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:54.454 14:56:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:54.454 14:56:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:54.454 14:56:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:54.454 14:56:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:54.454 14:56:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:54.454 14:56:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:54.454 14:56:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:54.454 14:56:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d5dbffa3-8145-4a26-bb17-cc33a438f929 00:03:54.454 
14:56:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=d5dbffa3-8145-4a26-bb17-cc33a438f929 00:03:54.454 14:56:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:54.454 14:56:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:54.454 14:56:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:54.454 14:56:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:54.454 14:56:55 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:54.454 14:56:55 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:54.454 14:56:55 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:54.454 14:56:55 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:54.454 14:56:55 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:54.454 14:56:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.454 14:56:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.454 14:56:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.454 14:56:55 -- paths/export.sh@5 -- # export PATH 00:03:54.454 14:56:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.454 14:56:55 -- nvmf/common.sh@51 -- # : 0 00:03:54.454 14:56:55 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:54.454 14:56:55 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:54.454 14:56:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:54.454 14:56:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:54.454 14:56:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:54.455 14:56:55 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:54.455 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:54.455 14:56:55 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:54.455 14:56:55 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:54.455 14:56:55 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:54.455 14:56:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:54.455 14:56:55 -- spdk/autotest.sh@32 -- # uname -s 00:03:54.455 14:56:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:54.455 14:56:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:54.455 14:56:55 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:54.455 14:56:55 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:54.455 14:56:55 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:54.455 14:56:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:54.713 14:56:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:54.713 14:56:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:54.713 14:56:55 -- spdk/autotest.sh@48 -- # udevadm_pid=54736 00:03:54.713 14:56:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:54.713 14:56:55 -- pm/common@17 -- # local monitor 00:03:54.713 14:56:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.713 14:56:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:54.714 14:56:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.714 14:56:55 -- pm/common@25 -- # sleep 1 00:03:54.714 14:56:55 -- pm/common@21 -- # date +%s 00:03:54.714 14:56:55 -- pm/common@21 -- # date +%s 00:03:54.714 14:56:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732114615 00:03:54.714 14:56:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732114615 00:03:54.714 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732114615_collect-cpu-load.pm.log 00:03:54.714 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732114615_collect-vmstat.pm.log 00:03:55.648 14:56:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:55.648 14:56:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:55.648 14:56:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:55.648 14:56:56 -- common/autotest_common.sh@10 -- # set +x 00:03:55.648 14:56:56 -- spdk/autotest.sh@59 -- # create_test_list 00:03:55.648 14:56:56 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:55.648 14:56:56 -- common/autotest_common.sh@10 -- # set +x 00:03:55.648 14:56:56 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:55.648 14:56:56 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:55.648 14:56:56 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:55.648 14:56:56 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:55.648 14:56:56 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:55.648 14:56:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:55.648 14:56:56 -- common/autotest_common.sh@1457 -- # uname 00:03:55.648 14:56:56 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:55.649 14:56:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:55.649 14:56:56 -- common/autotest_common.sh@1477 -- # uname 00:03:55.649 14:56:56 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:55.649 14:56:56 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:55.649 14:56:56 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:55.908 lcov: LCOV version 1.15 00:03:55.908 14:56:56 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:10.796 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:10.797 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:28.895 14:57:27 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:28.895 14:57:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.895 14:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:28.895 14:57:27 -- spdk/autotest.sh@78 -- # rm -f 00:04:28.895 14:57:27 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.895 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.895 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:28.895 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:28.895 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:28.895 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:28.895 14:57:29 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:28.895 14:57:29 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:28.895 14:57:29 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:28.895 14:57:29 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:28.895 14:57:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:28.895 14:57:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:28.895 14:57:29 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:28.895 14:57:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:28.895 14:57:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:28.895 14:57:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:28.895 14:57:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:28.895 14:57:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:28.895 14:57:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:28.895 14:57:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:28.895 14:57:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:28.895 14:57:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:28.895 14:57:29 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:28.895 14:57:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:28.895 14:57:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:28.895 14:57:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:28.895 14:57:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:28.895 14:57:29 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:28.895 14:57:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:28.895 14:57:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:28.895 14:57:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:28.896 14:57:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:04:28.896 14:57:29 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:28.896 14:57:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:28.896 14:57:29 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:28.896 14:57:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:28.896 14:57:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:28.896 14:57:29 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:28.896 14:57:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:28.896 14:57:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:28.896 14:57:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:28.896 14:57:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:28.896 14:57:29 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:28.896 14:57:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:28.896 14:57:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:28.896 14:57:29 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:28.896 14:57:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:28.896 14:57:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:28.896 14:57:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:28.896 14:57:29 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:28.896 14:57:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:28.896 No valid GPT data, bailing 00:04:28.896 14:57:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:28.896 14:57:29 -- scripts/common.sh@394 -- # pt= 00:04:28.896 14:57:29 -- scripts/common.sh@395 -- # return 1 00:04:28.896 14:57:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:28.896 1+0 records in 00:04:28.896 1+0 records out 00:04:28.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184184 s, 56.9 MB/s 00:04:28.896 14:57:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:28.896 14:57:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:28.896 14:57:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:28.896 14:57:29 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:28.896 14:57:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:28.896 No valid GPT data, bailing 00:04:28.896 14:57:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:28.896 14:57:29 -- scripts/common.sh@394 -- # pt= 00:04:28.896 14:57:29 -- scripts/common.sh@395 -- # return 1 00:04:28.896 14:57:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:28.896 1+0 records in 00:04:28.896 1+0 records out 00:04:28.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00550757 s, 190 MB/s 00:04:28.896 14:57:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:28.896 14:57:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:28.896 14:57:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:28.896 14:57:29 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:28.896 14:57:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:28.896 No valid GPT data, bailing 00:04:28.896 14:57:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:28.896 14:57:29 -- scripts/common.sh@394 -- # pt= 00:04:28.896 14:57:29 -- scripts/common.sh@395 -- # return 1 00:04:28.896 14:57:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:28.896 1+0 
records in 00:04:28.896 1+0 records out 00:04:28.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621057 s, 169 MB/s 00:04:28.896 14:57:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:28.896 14:57:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:28.896 14:57:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:28.896 14:57:29 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:28.896 14:57:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:28.896 No valid GPT data, bailing 00:04:28.896 14:57:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:28.896 14:57:29 -- scripts/common.sh@394 -- # pt= 00:04:28.896 14:57:29 -- scripts/common.sh@395 -- # return 1 00:04:28.896 14:57:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:28.896 1+0 records in 00:04:28.896 1+0 records out 00:04:28.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621131 s, 169 MB/s 00:04:28.896 14:57:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:28.896 14:57:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:28.896 14:57:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:28.896 14:57:29 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:28.896 14:57:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:28.896 No valid GPT data, bailing 00:04:28.896 14:57:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:28.896 14:57:29 -- scripts/common.sh@394 -- # pt= 00:04:28.896 14:57:29 -- scripts/common.sh@395 -- # return 1 00:04:28.896 14:57:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:28.896 1+0 records in 00:04:28.896 1+0 records out 00:04:28.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00680171 s, 154 MB/s 00:04:28.896 14:57:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:28.896 14:57:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:28.896 14:57:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:28.896 14:57:29 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:28.896 14:57:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:28.896 No valid GPT data, bailing 00:04:28.896 14:57:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:28.896 14:57:29 -- scripts/common.sh@394 -- # pt= 00:04:28.896 14:57:29 -- scripts/common.sh@395 -- # return 1 00:04:28.896 14:57:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:28.896 1+0 records in 00:04:28.896 1+0 records out 00:04:28.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441027 s, 238 MB/s 00:04:28.896 14:57:29 -- spdk/autotest.sh@105 -- # sync 00:04:29.155 14:57:29 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:29.155 14:57:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:29.155 14:57:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:32.443 14:57:32 -- spdk/autotest.sh@111 -- # uname -s 00:04:32.443 14:57:32 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:32.443 14:57:32 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:32.443 14:57:32 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:33.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:33.272 
Hugepages 00:04:33.272 node hugesize free / total 00:04:33.272 node0 1048576kB 0 / 0 00:04:33.272 node0 2048kB 0 / 0 00:04:33.272 00:04:33.272 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:33.530 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:33.530 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:33.788 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:33.788 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:34.046 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:34.046 14:57:34 -- spdk/autotest.sh@117 -- # uname -s 00:04:34.046 14:57:34 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:34.046 14:57:34 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:34.046 14:57:34 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:34.611 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.545 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.545 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.545 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.545 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.804 14:57:36 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:36.739 14:57:37 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:36.739 14:57:37 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:36.739 14:57:37 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:36.739 14:57:37 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:36.739 14:57:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:36.739 14:57:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:36.739 14:57:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:36.739 14:57:37 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:36.739 14:57:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:36.739 14:57:37 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:36.739 14:57:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:36.739 14:57:37 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:37.564 Waiting for block devices as requested 00:04:37.822 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:37.822 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:37.822 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:38.081 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:43.343 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:43.343 14:57:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:43.343 14:57:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:43.343 14:57:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:43.343 14:57:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:43.343 14:57:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:43.343 14:57:43 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:43.343 14:57:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:43.343 14:57:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:43.343 14:57:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:43.343 14:57:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:43.343 14:57:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:43.343 14:57:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:43.343 14:57:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:43.343 14:57:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:43.343 14:57:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:43.343 14:57:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:43.343 14:57:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:43.343 14:57:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:43.343 14:57:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:43.343 14:57:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:43.343 14:57:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:43.343 14:57:43 -- common/autotest_common.sh@1543 -- # continue 00:04:43.343 14:57:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:43.343 14:57:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:43.343 14:57:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:43.343 14:57:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:43.343 14:57:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:43.343 14:57:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:43.343 14:57:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:43.343 14:57:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:43.343 14:57:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:43.343 14:57:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:43.343 14:57:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:43.343 14:57:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:43.343 14:57:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:43.343 14:57:44 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:43.343 14:57:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:43.343 14:57:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:43.343 14:57:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:43.343 14:57:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:43.343 14:57:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:43.343 14:57:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:43.343 14:57:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:43.343 14:57:44 -- common/autotest_common.sh@1543 -- # continue 00:04:43.343 14:57:44 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:43.343 14:57:44 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:43.343 14:57:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:43.343 14:57:44 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:43.343 14:57:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:43.343 14:57:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:43.343 14:57:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:43.343 14:57:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:43.343 14:57:44 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:43.343 14:57:44 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:43.343 14:57:44 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:43.343 14:57:44 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:43.343 14:57:44 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:43.343 14:57:44 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:43.343 14:57:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:43.343 14:57:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:43.343 14:57:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:43.343 14:57:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:43.343 14:57:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:43.343 14:57:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:43.343 14:57:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:43.343 14:57:44 -- common/autotest_common.sh@1543 -- # continue 00:04:43.343 14:57:44 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:43.344 14:57:44 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:43.344 14:57:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:43.344 14:57:44 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:43.344 14:57:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:43.344 14:57:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:43.344 14:57:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:43.344 14:57:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:43.344 14:57:44 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:43.344 14:57:44 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:43.344 14:57:44 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:43.344 14:57:44 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:43.344 14:57:44 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:43.344 14:57:44 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:43.344 14:57:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:43.344 14:57:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:43.344 14:57:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:43.344 14:57:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:43.344 14:57:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:43.344 14:57:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:43.344 14:57:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:04:43.344 14:57:44 -- common/autotest_common.sh@1543 -- # continue 00:04:43.344 14:57:44 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:43.344 14:57:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:43.344 14:57:44 -- common/autotest_common.sh@10 -- # set +x 00:04:43.603 14:57:44 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:43.603 14:57:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.603 14:57:44 -- common/autotest_common.sh@10 -- # set +x 00:04:43.603 14:57:44 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:44.171 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.108 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.108 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.108 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.108 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.367 14:57:45 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:45.367 14:57:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:45.367 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:04:45.367 14:57:46 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:45.367 14:57:46 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:45.367 14:57:46 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:45.367 14:57:46 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:45.367 14:57:46 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:45.367 14:57:46 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:45.367 14:57:46 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:45.367 14:57:46 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:45.367 14:57:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:45.367 14:57:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:45.367 14:57:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.367 14:57:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:45.367 14:57:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:45.367 14:57:46 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:45.367 14:57:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:45.367 14:57:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:45.367 14:57:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:45.367 14:57:46 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:45.367 14:57:46 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:45.367 14:57:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:45.367 14:57:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:45.367 14:57:46 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:45.367 14:57:46 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:45.367 14:57:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:45.367 14:57:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:45.367 14:57:46 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:45.367 14:57:46 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:04:45.367 14:57:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:45.367 14:57:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:45.367 14:57:46 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:45.367 14:57:46 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:45.367 14:57:46 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:45.367 14:57:46 -- common/autotest_common.sh@1572 -- # return 0 00:04:45.367 14:57:46 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:45.367 14:57:46 -- common/autotest_common.sh@1580 -- # return 0 00:04:45.367 14:57:46 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:45.367 14:57:46 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:45.367 14:57:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:45.367 14:57:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:45.367 14:57:46 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:45.367 14:57:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.367 14:57:46 -- common/autotest_common.sh@10 -- # set +x 00:04:45.367 14:57:46 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:45.367 14:57:46 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:45.367 14:57:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.367 14:57:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.367 14:57:46 -- common/autotest_common.sh@10 -- # set +x 00:04:45.367 ************************************ 00:04:45.367 START TEST env 00:04:45.367 ************************************ 00:04:45.367 14:57:46 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:45.627 * Looking for test storage... 00:04:45.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:45.627 14:57:46 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.627 14:57:46 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.627 14:57:46 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.627 14:57:46 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.627 14:57:46 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.627 14:57:46 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.627 14:57:46 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.627 14:57:46 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.627 14:57:46 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.627 14:57:46 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.627 14:57:46 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.627 14:57:46 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.627 14:57:46 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.627 14:57:46 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.627 14:57:46 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.628 14:57:46 env -- scripts/common.sh@344 -- # case "$op" in 00:04:45.628 14:57:46 env -- scripts/common.sh@345 -- # : 1 00:04:45.628 14:57:46 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.628 14:57:46 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.628 14:57:46 env -- scripts/common.sh@365 -- # decimal 1 00:04:45.628 14:57:46 env -- scripts/common.sh@353 -- # local d=1 00:04:45.628 14:57:46 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.628 14:57:46 env -- scripts/common.sh@355 -- # echo 1 00:04:45.628 14:57:46 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.628 14:57:46 env -- scripts/common.sh@366 -- # decimal 2 00:04:45.628 14:57:46 env -- scripts/common.sh@353 -- # local d=2 00:04:45.628 14:57:46 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.628 14:57:46 env -- scripts/common.sh@355 -- # echo 2 00:04:45.628 14:57:46 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.628 14:57:46 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.628 14:57:46 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.628 14:57:46 env -- scripts/common.sh@368 -- # return 0 00:04:45.628 14:57:46 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.628 14:57:46 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.628 --rc genhtml_branch_coverage=1 00:04:45.628 --rc genhtml_function_coverage=1 00:04:45.628 --rc genhtml_legend=1 00:04:45.628 --rc geninfo_all_blocks=1 00:04:45.628 --rc geninfo_unexecuted_blocks=1 00:04:45.628 00:04:45.628 ' 00:04:45.628 14:57:46 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.628 --rc genhtml_branch_coverage=1 00:04:45.628 --rc genhtml_function_coverage=1 00:04:45.628 --rc genhtml_legend=1 00:04:45.628 --rc geninfo_all_blocks=1 00:04:45.628 --rc geninfo_unexecuted_blocks=1 00:04:45.628 00:04:45.628 ' 00:04:45.628 14:57:46 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.628 --rc genhtml_branch_coverage=1 00:04:45.628 --rc genhtml_function_coverage=1 00:04:45.628 --rc genhtml_legend=1 00:04:45.628 --rc geninfo_all_blocks=1 00:04:45.628 --rc geninfo_unexecuted_blocks=1 00:04:45.628 00:04:45.628 ' 00:04:45.628 14:57:46 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.628 --rc genhtml_branch_coverage=1 00:04:45.628 --rc genhtml_function_coverage=1 00:04:45.628 --rc genhtml_legend=1 00:04:45.628 --rc geninfo_all_blocks=1 00:04:45.628 --rc geninfo_unexecuted_blocks=1 00:04:45.628 00:04:45.628 ' 00:04:45.628 14:57:46 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:45.628 14:57:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.628 14:57:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.628 14:57:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.888 ************************************ 00:04:45.888 START TEST env_memory 00:04:45.888 ************************************ 00:04:45.888 14:57:46 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:45.888 00:04:45.888 00:04:45.888 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.888 http://cunit.sourceforge.net/ 00:04:45.888 00:04:45.888 00:04:45.888 Suite: memory 00:04:45.888 Test: alloc and free memory map ...[2024-11-20 14:57:46.539132] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:45.888 passed 00:04:45.888 Test: mem map translation ...[2024-11-20 14:57:46.598380] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:45.888 [2024-11-20 14:57:46.598815] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:45.888 [2024-11-20 14:57:46.599254] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:45.888 [2024-11-20 14:57:46.599622] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:45.888 passed 00:04:45.888 Test: mem map registration ...[2024-11-20 14:57:46.686965] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:45.888 [2024-11-20 14:57:46.687059] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:46.148 passed 00:04:46.148 Test: mem map adjacent registrations ...passed 00:04:46.148 00:04:46.148 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.148 suites 1 1 n/a 0 0 00:04:46.148 tests 4 4 4 0 0 00:04:46.148 asserts 152 152 152 0 n/a 00:04:46.148 00:04:46.148 Elapsed time = 0.313 seconds 00:04:46.148 00:04:46.148 real 0m0.364s 00:04:46.148 user 0m0.321s 00:04:46.148 sys 0m0.031s 00:04:46.148 14:57:46 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.148 ************************************ 00:04:46.148 END TEST env_memory 00:04:46.148 ************************************ 00:04:46.148 14:57:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:46.148 14:57:46 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:46.148 14:57:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.148 14:57:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.148 14:57:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.148 ************************************ 00:04:46.148 START TEST env_vtophys 00:04:46.148 ************************************ 00:04:46.148 14:57:46 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:46.148 EAL: lib.eal log level changed from notice to debug 00:04:46.148 EAL: Detected lcore 0 as core 0 on socket 0 00:04:46.148 EAL: Detected lcore 1 as core 0 on socket 0 00:04:46.148 EAL: Detected lcore 2 as core 0 on socket 0 00:04:46.148 EAL: Detected lcore 3 as core 0 on socket 0 00:04:46.148 EAL: Detected lcore 4 as core 0 on socket 0 00:04:46.148 EAL: Detected lcore 5 as core 0 on socket 0 00:04:46.148 EAL: Detected lcore 6 as core 0 on socket 0 00:04:46.148 EAL: Detected lcore 7 as core 0 on socket 0 00:04:46.148 EAL: Detected lcore 8 as core 0 on socket 0 00:04:46.148 EAL: Detected lcore 9 as core 0 on socket 0 00:04:46.148 EAL: Maximum logical cores by configuration: 128 00:04:46.148 EAL: Detected CPU lcores: 10 00:04:46.148 EAL: Detected NUMA nodes: 1 00:04:46.148 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:46.148 EAL: Detected shared linkage of DPDK 00:04:46.408 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:46.408 EAL: Selected IOVA mode 'PA' 00:04:46.408 EAL: Probing VFIO support... 00:04:46.408 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:46.408 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:46.408 EAL: Ask a virtual area of 0x2e000 bytes 00:04:46.408 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:46.408 EAL: Setting up physically contiguous memory... 00:04:46.408 EAL: Setting maximum number of open files to 524288 00:04:46.408 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:46.408 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:46.408 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.408 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:46.408 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.408 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.408 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:46.408 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:46.408 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.408 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:46.408 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.408 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.408 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:46.408 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:46.408 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.408 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:46.408 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.408 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.408 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:46.408 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:46.408 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.408 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:46.408 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.408 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.408 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:46.408 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:46.408 EAL: Hugepages will be freed exactly as allocated. 00:04:46.408 EAL: No shared files mode enabled, IPC is disabled 00:04:46.408 EAL: No shared files mode enabled, IPC is disabled 00:04:46.408 EAL: TSC frequency is ~2490000 KHz 00:04:46.408 EAL: Main lcore 0 is ready (tid=7f0a51c4ca40;cpuset=[0]) 00:04:46.408 EAL: Trying to obtain current memory policy. 00:04:46.408 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.408 EAL: Restoring previous memory policy: 0 00:04:46.408 EAL: request: mp_malloc_sync 00:04:46.408 EAL: No shared files mode enabled, IPC is disabled 00:04:46.408 EAL: Heap on socket 0 was expanded by 2MB 00:04:46.408 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:46.408 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:46.408 EAL: Mem event callback 'spdk:(nil)' registered 00:04:46.408 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:46.408 00:04:46.408 00:04:46.409 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.409 http://cunit.sourceforge.net/ 00:04:46.409 00:04:46.409 00:04:46.409 Suite: components_suite 00:04:46.977 Test: vtophys_malloc_test ...passed 00:04:46.977 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:46.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.977 EAL: Restoring previous memory policy: 4 00:04:46.977 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.977 EAL: request: mp_malloc_sync 00:04:46.977 EAL: No shared files mode enabled, IPC is disabled 00:04:46.977 EAL: Heap on socket 0 was expanded by 4MB 00:04:46.977 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.977 EAL: request: mp_malloc_sync 00:04:46.977 EAL: No shared files mode enabled, IPC is disabled 00:04:46.977 EAL: Heap on socket 0 was shrunk by 4MB 00:04:46.977 EAL: Trying to obtain current memory policy. 00:04:46.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.977 EAL: Restoring previous memory policy: 4 00:04:46.977 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.977 EAL: request: mp_malloc_sync 00:04:46.977 EAL: No shared files mode enabled, IPC is disabled 00:04:46.977 EAL: Heap on socket 0 was expanded by 6MB 00:04:46.977 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.977 EAL: request: mp_malloc_sync 00:04:46.977 EAL: No shared files mode enabled, IPC is disabled 00:04:46.977 EAL: Heap on socket 0 was shrunk by 6MB 00:04:46.977 EAL: Trying to obtain current memory policy. 00:04:46.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.977 EAL: Restoring previous memory policy: 4 00:04:46.977 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.977 EAL: request: mp_malloc_sync 00:04:46.977 EAL: No shared files mode enabled, IPC is disabled 00:04:46.977 EAL: Heap on socket 0 was expanded by 10MB 00:04:46.977 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.977 EAL: request: mp_malloc_sync 00:04:46.977 EAL: No shared files mode enabled, IPC is disabled 00:04:46.977 EAL: Heap on socket 0 was shrunk by 10MB 00:04:46.977 EAL: Trying to obtain current memory policy. 00:04:46.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.977 EAL: Restoring previous memory policy: 4 00:04:46.977 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.977 EAL: request: mp_malloc_sync 00:04:46.977 EAL: No shared files mode enabled, IPC is disabled 00:04:46.977 EAL: Heap on socket 0 was expanded by 18MB 00:04:46.977 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.977 EAL: request: mp_malloc_sync 00:04:46.977 EAL: No shared files mode enabled, IPC is disabled 00:04:46.977 EAL: Heap on socket 0 was shrunk by 18MB 00:04:46.977 EAL: Trying to obtain current memory policy. 00:04:46.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.977 EAL: Restoring previous memory policy: 4 00:04:46.977 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.977 EAL: request: mp_malloc_sync 00:04:46.977 EAL: No shared files mode enabled, IPC is disabled 00:04:46.977 EAL: Heap on socket 0 was expanded by 34MB 00:04:47.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.236 EAL: request: mp_malloc_sync 00:04:47.236 EAL: No shared files mode enabled, IPC is disabled 00:04:47.236 EAL: Heap on socket 0 was shrunk by 34MB 00:04:47.236 EAL: Trying to obtain current memory policy. 
00:04:47.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.236 EAL: Restoring previous memory policy: 4 00:04:47.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.236 EAL: request: mp_malloc_sync 00:04:47.236 EAL: No shared files mode enabled, IPC is disabled 00:04:47.236 EAL: Heap on socket 0 was expanded by 66MB 00:04:47.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.236 EAL: request: mp_malloc_sync 00:04:47.236 EAL: No shared files mode enabled, IPC is disabled 00:04:47.236 EAL: Heap on socket 0 was shrunk by 66MB 00:04:47.495 EAL: Trying to obtain current memory policy. 00:04:47.495 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.495 EAL: Restoring previous memory policy: 4 00:04:47.495 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.495 EAL: request: mp_malloc_sync 00:04:47.495 EAL: No shared files mode enabled, IPC is disabled 00:04:47.495 EAL: Heap on socket 0 was expanded by 130MB 00:04:47.755 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.755 EAL: request: mp_malloc_sync 00:04:47.755 EAL: No shared files mode enabled, IPC is disabled 00:04:47.755 EAL: Heap on socket 0 was shrunk by 130MB 00:04:48.014 EAL: Trying to obtain current memory policy. 00:04:48.014 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.014 EAL: Restoring previous memory policy: 4 00:04:48.014 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.014 EAL: request: mp_malloc_sync 00:04:48.014 EAL: No shared files mode enabled, IPC is disabled 00:04:48.014 EAL: Heap on socket 0 was expanded by 258MB 00:04:48.586 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.586 EAL: request: mp_malloc_sync 00:04:48.586 EAL: No shared files mode enabled, IPC is disabled 00:04:48.586 EAL: Heap on socket 0 was shrunk by 258MB 00:04:48.845 EAL: Trying to obtain current memory policy. 00:04:48.845 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.104 EAL: Restoring previous memory policy: 4 00:04:49.104 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.104 EAL: request: mp_malloc_sync 00:04:49.104 EAL: No shared files mode enabled, IPC is disabled 00:04:49.104 EAL: Heap on socket 0 was expanded by 514MB 00:04:50.041 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.041 EAL: request: mp_malloc_sync 00:04:50.041 EAL: No shared files mode enabled, IPC is disabled 00:04:50.041 EAL: Heap on socket 0 was shrunk by 514MB 00:04:50.977 EAL: Trying to obtain current memory policy. 
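Each "Heap on socket 0 was expanded by"/"shrunk by" pair above is one allocate-and-free round of vtophys_spdk_malloc_test at roughly double the previous size (4MB, 6MB, 10MB, ... up to 1026MB), each round driving the 'spdk:(nil)' mem event callback registered earlier. A minimal sketch of rerunning this binary by hand, assuming a built SPDK checkout and SPDK's setup script conventions (HUGEMEM is that script's hugepage-size knob in MB; both path and variable are assumptions, not taken from this log):

    # Reserve ~2GB of hugepages first; the test's largest round allocates 1026MB.
    sudo HUGEMEM=2048 ./scripts/setup.sh
    ./test/env/vtophys/vtophys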
00:04:50.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.236 EAL: Restoring previous memory policy: 4 00:04:51.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.236 EAL: request: mp_malloc_sync 00:04:51.236 EAL: No shared files mode enabled, IPC is disabled 00:04:51.236 EAL: Heap on socket 0 was expanded by 1026MB 00:04:53.140 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.399 EAL: request: mp_malloc_sync 00:04:53.399 EAL: No shared files mode enabled, IPC is disabled 00:04:53.399 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:55.313 passed 00:04:55.313 00:04:55.313 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.313 suites 1 1 n/a 0 0 00:04:55.313 tests 2 2 2 0 0 00:04:55.313 asserts 5838 5838 5838 0 n/a 00:04:55.313 00:04:55.313 Elapsed time = 8.556 seconds 00:04:55.313 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.313 EAL: request: mp_malloc_sync 00:04:55.313 EAL: No shared files mode enabled, IPC is disabled 00:04:55.313 EAL: Heap on socket 0 was shrunk by 2MB 00:04:55.313 EAL: No shared files mode enabled, IPC is disabled 00:04:55.313 EAL: No shared files mode enabled, IPC is disabled 00:04:55.313 EAL: No shared files mode enabled, IPC is disabled 00:04:55.313 00:04:55.313 real 0m8.920s 00:04:55.313 user 0m7.817s 00:04:55.313 sys 0m0.926s 00:04:55.313 ************************************ 00:04:55.313 END TEST env_vtophys 00:04:55.313 ************************************ 00:04:55.313 14:57:55 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.313 14:57:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:55.313 14:57:55 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:55.313 14:57:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.313 14:57:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.313 14:57:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.313 ************************************ 00:04:55.313 START TEST env_pci 00:04:55.313 ************************************ 00:04:55.313 14:57:55 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:55.313 00:04:55.313 00:04:55.313 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.313 http://cunit.sourceforge.net/ 00:04:55.313 00:04:55.313 00:04:55.313 Suite: pci 00:04:55.313 Test: pci_hook ...[2024-11-20 14:57:55.955858] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57615 has claimed it 00:04:55.313 passed 00:04:55.313 00:04:55.313 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.313 suites 1 1 n/a 0 0 00:04:55.313 tests 1 1 1 0 0 00:04:55.313 asserts 25 25 25 0 n/a 00:04:55.313 00:04:55.313 Elapsed time = 0.010 seconds 00:04:55.313 EAL: Cannot find device (10000:00:01.0) 00:04:55.313 EAL: Failed to attach device on primary process 00:04:55.313 00:04:55.313 real 0m0.133s 00:04:55.313 user 0m0.049s 00:04:55.313 sys 0m0.082s 00:04:55.313 ************************************ 00:04:55.313 END TEST env_pci 00:04:55.313 ************************************ 00:04:55.313 14:57:56 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.313 14:57:56 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:55.313 14:57:56 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:55.313 14:57:56 env -- env/env.sh@15 -- # uname 00:04:55.313 14:57:56 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:55.313 14:57:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:55.313 14:57:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:55.313 14:57:56 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:55.313 14:57:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.313 14:57:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.313 ************************************ 00:04:55.313 START TEST env_dpdk_post_init 00:04:55.313 ************************************ 00:04:55.313 14:57:56 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:55.572 EAL: Detected CPU lcores: 10 00:04:55.572 EAL: Detected NUMA nodes: 1 00:04:55.572 EAL: Detected shared linkage of DPDK 00:04:55.572 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:55.572 EAL: Selected IOVA mode 'PA' 00:04:55.572 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:55.572 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:55.572 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:55.572 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:55.572 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:55.831 Starting DPDK initialization... 00:04:55.831 Starting SPDK post initialization... 00:04:55.831 SPDK NVMe probe 00:04:55.831 Attaching to 0000:00:10.0 00:04:55.831 Attaching to 0000:00:11.0 00:04:55.831 Attaching to 0000:00:12.0 00:04:55.831 Attaching to 0000:00:13.0 00:04:55.831 Attached to 0000:00:10.0 00:04:55.831 Attached to 0000:00:11.0 00:04:55.831 Attached to 0000:00:13.0 00:04:55.831 Attached to 0000:00:12.0 00:04:55.831 Cleaning up... 
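The four "Probe PCI driver" lines above are spdk_nvme probe callbacks for the QEMU-emulated NVMe controllers (1b36:0010 is QEMU's NVMe device ID), and the "Attached to" lines are the matching attach callbacks; note that 13.0 attached before 12.0, since attach order is not guaranteed to follow probe order. A sketch of reproducing this run outside the harness, with the flags copied verbatim from the run_test invocation above (the working directory assumes the same repo layout):

    cd /home/vagrant/spdk_repo/spdk
    ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000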
00:04:55.831 00:04:55.831 real 0m0.340s 00:04:55.831 user 0m0.123s 00:04:55.831 sys 0m0.119s 00:04:55.831 14:57:56 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.831 ************************************ 00:04:55.831 END TEST env_dpdk_post_init 00:04:55.831 ************************************ 00:04:55.831 14:57:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.831 14:57:56 env -- env/env.sh@26 -- # uname 00:04:55.831 14:57:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:55.831 14:57:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.831 14:57:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.831 14:57:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.831 14:57:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.831 ************************************ 00:04:55.831 START TEST env_mem_callbacks 00:04:55.831 ************************************ 00:04:55.831 14:57:56 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.831 EAL: Detected CPU lcores: 10 00:04:55.831 EAL: Detected NUMA nodes: 1 00:04:55.831 EAL: Detected shared linkage of DPDK 00:04:55.831 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:55.831 EAL: Selected IOVA mode 'PA' 00:04:56.090 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.090 00:04:56.090 00:04:56.090 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.090 http://cunit.sourceforge.net/ 00:04:56.090 00:04:56.090 00:04:56.090 Suite: memory 00:04:56.090 Test: test ... 00:04:56.090 register 0x200000200000 2097152 00:04:56.090 malloc 3145728 00:04:56.090 register 0x200000400000 4194304 00:04:56.090 buf 0x2000004fffc0 len 3145728 PASSED 00:04:56.090 malloc 64 00:04:56.090 buf 0x2000004ffec0 len 64 PASSED 00:04:56.090 malloc 4194304 00:04:56.090 register 0x200000800000 6291456 00:04:56.090 buf 0x2000009fffc0 len 4194304 PASSED 00:04:56.090 free 0x2000004fffc0 3145728 00:04:56.090 free 0x2000004ffec0 64 00:04:56.090 unregister 0x200000400000 4194304 PASSED 00:04:56.090 free 0x2000009fffc0 4194304 00:04:56.090 unregister 0x200000800000 6291456 PASSED 00:04:56.090 malloc 8388608 00:04:56.090 register 0x200000400000 10485760 00:04:56.090 buf 0x2000005fffc0 len 8388608 PASSED 00:04:56.090 free 0x2000005fffc0 8388608 00:04:56.090 unregister 0x200000400000 10485760 PASSED 00:04:56.090 passed 00:04:56.090 00:04:56.090 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.090 suites 1 1 n/a 0 0 00:04:56.090 tests 1 1 1 0 0 00:04:56.090 asserts 15 15 15 0 n/a 00:04:56.090 00:04:56.090 Elapsed time = 0.089 seconds 00:04:56.090 00:04:56.090 real 0m0.315s 00:04:56.090 user 0m0.114s 00:04:56.090 sys 0m0.096s 00:04:56.090 14:57:56 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.090 ************************************ 00:04:56.090 END TEST env_mem_callbacks 00:04:56.090 ************************************ 00:04:56.090 14:57:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:56.090 ************************************ 00:04:56.090 END TEST env 00:04:56.090 ************************************ 00:04:56.090 00:04:56.090 real 0m10.716s 00:04:56.090 user 0m8.689s 00:04:56.090 sys 0m1.634s 00:04:56.090 14:57:56 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.090 14:57:56 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:56.349 14:57:56 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:56.349 14:57:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.349 14:57:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.349 14:57:56 -- common/autotest_common.sh@10 -- # set +x 00:04:56.349 ************************************ 00:04:56.349 START TEST rpc 00:04:56.349 ************************************ 00:04:56.349 14:57:56 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:56.349 * Looking for test storage... 00:04:56.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:56.349 14:57:57 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:56.349 14:57:57 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:56.349 14:57:57 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:56.608 14:57:57 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:56.608 14:57:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.608 14:57:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.608 14:57:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.608 14:57:57 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.608 14:57:57 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.608 14:57:57 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.608 14:57:57 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.608 14:57:57 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.608 14:57:57 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.608 14:57:57 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.608 14:57:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.608 14:57:57 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:56.608 14:57:57 rpc -- scripts/common.sh@345 -- # : 1 00:04:56.608 14:57:57 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.608 14:57:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.608 14:57:57 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:56.608 14:57:57 rpc -- scripts/common.sh@353 -- # local d=1 00:04:56.608 14:57:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.608 14:57:57 rpc -- scripts/common.sh@355 -- # echo 1 00:04:56.608 14:57:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.608 14:57:57 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:56.608 14:57:57 rpc -- scripts/common.sh@353 -- # local d=2 00:04:56.608 14:57:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.608 14:57:57 rpc -- scripts/common.sh@355 -- # echo 2 00:04:56.608 14:57:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.608 14:57:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.608 14:57:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.608 14:57:57 rpc -- scripts/common.sh@368 -- # return 0 00:04:56.608 14:57:57 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.608 14:57:57 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:56.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.608 --rc genhtml_branch_coverage=1 00:04:56.608 --rc genhtml_function_coverage=1 00:04:56.608 --rc genhtml_legend=1 00:04:56.608 --rc geninfo_all_blocks=1 00:04:56.608 --rc geninfo_unexecuted_blocks=1 00:04:56.608 00:04:56.608 ' 00:04:56.608 14:57:57 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:56.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.608 --rc genhtml_branch_coverage=1 00:04:56.608 --rc genhtml_function_coverage=1 00:04:56.608 --rc genhtml_legend=1 00:04:56.608 --rc geninfo_all_blocks=1 00:04:56.608 --rc geninfo_unexecuted_blocks=1 00:04:56.608 00:04:56.608 ' 00:04:56.608 14:57:57 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:56.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.608 --rc genhtml_branch_coverage=1 00:04:56.608 --rc genhtml_function_coverage=1 00:04:56.608 --rc genhtml_legend=1 00:04:56.608 --rc geninfo_all_blocks=1 00:04:56.608 --rc geninfo_unexecuted_blocks=1 00:04:56.608 00:04:56.608 ' 00:04:56.608 14:57:57 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:56.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.608 --rc genhtml_branch_coverage=1 00:04:56.608 --rc genhtml_function_coverage=1 00:04:56.608 --rc genhtml_legend=1 00:04:56.608 --rc geninfo_all_blocks=1 00:04:56.608 --rc geninfo_unexecuted_blocks=1 00:04:56.608 00:04:56.608 ' 00:04:56.608 14:57:57 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:56.608 14:57:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57747 00:04:56.608 14:57:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.608 14:57:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57747 00:04:56.608 14:57:57 rpc -- common/autotest_common.sh@835 -- # '[' -z 57747 ']' 00:04:56.608 14:57:57 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.608 14:57:57 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.609 14:57:57 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
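At this point rpc.sh has launched spdk_tgt with -e bdev, enabling the bdev tracepoint group (which is why trace_get_info further down reports tpoint_group_mask 0x8), and waitforlisten polls until the RPC socket answers. A simplified sketch of that start-and-wait pattern, assuming a built tree; the real helper also records the PID and enforces a retry limit:

    ./build/bin/spdk_tgt -e bdev &
    # Poll the default UNIX socket until the RPC server responds.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done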
00:04:56.609 14:57:57 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.609 14:57:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.609 [2024-11-20 14:57:57.349426] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:04:56.609 [2024-11-20 14:57:57.349804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57747 ] 00:04:56.867 [2024-11-20 14:57:57.533950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.867 [2024-11-20 14:57:57.674439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:56.867 [2024-11-20 14:57:57.674767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57747' to capture a snapshot of events at runtime. 00:04:56.867 [2024-11-20 14:57:57.674964] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:56.867 [2024-11-20 14:57:57.675023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:56.867 [2024-11-20 14:57:57.675151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57747 for offline analysis/debug. 00:04:56.867 [2024-11-20 14:57:57.676625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.243 14:57:58 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.243 14:57:58 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:58.243 14:57:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:58.244 14:57:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:58.244 14:57:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:58.244 14:57:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:58.244 14:57:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.244 14:57:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.244 14:57:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.244 ************************************ 00:04:58.244 START TEST rpc_integrity 00:04:58.244 ************************************ 00:04:58.244 14:57:58 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:58.244 14:57:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:58.244 14:57:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.244 14:57:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.244 14:57:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.244 14:57:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:58.244 14:57:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:58.244 14:57:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:58.244 14:57:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:58.244 14:57:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.244 14:57:58 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.244 14:57:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.244 14:57:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:58.244 14:57:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:58.244 14:57:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.244 14:57:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.244 14:57:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.244 14:57:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:58.244 { 00:04:58.244 "name": "Malloc0", 00:04:58.244 "aliases": [ 00:04:58.244 "130f202e-fb36-40cc-811c-ed8c687f07ce" 00:04:58.244 ], 00:04:58.244 "product_name": "Malloc disk", 00:04:58.244 "block_size": 512, 00:04:58.244 "num_blocks": 16384, 00:04:58.244 "uuid": "130f202e-fb36-40cc-811c-ed8c687f07ce", 00:04:58.244 "assigned_rate_limits": { 00:04:58.244 "rw_ios_per_sec": 0, 00:04:58.244 "rw_mbytes_per_sec": 0, 00:04:58.244 "r_mbytes_per_sec": 0, 00:04:58.244 "w_mbytes_per_sec": 0 00:04:58.244 }, 00:04:58.244 "claimed": false, 00:04:58.244 "zoned": false, 00:04:58.244 "supported_io_types": { 00:04:58.244 "read": true, 00:04:58.244 "write": true, 00:04:58.244 "unmap": true, 00:04:58.244 "flush": true, 00:04:58.244 "reset": true, 00:04:58.244 "nvme_admin": false, 00:04:58.244 "nvme_io": false, 00:04:58.244 "nvme_io_md": false, 00:04:58.244 "write_zeroes": true, 00:04:58.244 "zcopy": true, 00:04:58.244 "get_zone_info": false, 00:04:58.244 "zone_management": false, 00:04:58.244 "zone_append": false, 00:04:58.244 "compare": false, 00:04:58.244 "compare_and_write": false, 00:04:58.244 "abort": true, 00:04:58.244 "seek_hole": false, 00:04:58.244 "seek_data": false, 00:04:58.244 "copy": true, 00:04:58.244 "nvme_iov_md": false 00:04:58.244 }, 00:04:58.244 "memory_domains": [ 00:04:58.244 { 00:04:58.244 "dma_device_id": "system", 00:04:58.244 "dma_device_type": 1 00:04:58.244 }, 00:04:58.244 { 00:04:58.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.244 "dma_device_type": 2 00:04:58.244 } 00:04:58.244 ], 00:04:58.244 "driver_specific": {} 00:04:58.244 } 00:04:58.244 ]' 00:04:58.244 14:57:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:58.244 14:57:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:58.244 14:57:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:58.244 14:57:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.244 14:57:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.244 [2024-11-20 14:57:58.935270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:58.244 [2024-11-20 14:57:58.935388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:58.244 [2024-11-20 14:57:58.935428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:58.244 [2024-11-20 14:57:58.935445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:58.244 [2024-11-20 14:57:58.938657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:58.244 [2024-11-20 14:57:58.938739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:58.244 Passthru0 00:04:58.244 14:57:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.244 
14:57:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:58.244 14:57:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.244 14:57:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.244 14:57:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.244 14:57:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:58.244 { 00:04:58.244 "name": "Malloc0", 00:04:58.244 "aliases": [ 00:04:58.244 "130f202e-fb36-40cc-811c-ed8c687f07ce" 00:04:58.244 ], 00:04:58.244 "product_name": "Malloc disk", 00:04:58.244 "block_size": 512, 00:04:58.244 "num_blocks": 16384, 00:04:58.244 "uuid": "130f202e-fb36-40cc-811c-ed8c687f07ce", 00:04:58.244 "assigned_rate_limits": { 00:04:58.244 "rw_ios_per_sec": 0, 00:04:58.244 "rw_mbytes_per_sec": 0, 00:04:58.244 "r_mbytes_per_sec": 0, 00:04:58.244 "w_mbytes_per_sec": 0 00:04:58.244 }, 00:04:58.244 "claimed": true, 00:04:58.244 "claim_type": "exclusive_write", 00:04:58.244 "zoned": false, 00:04:58.244 "supported_io_types": { 00:04:58.244 "read": true, 00:04:58.244 "write": true, 00:04:58.244 "unmap": true, 00:04:58.244 "flush": true, 00:04:58.244 "reset": true, 00:04:58.244 "nvme_admin": false, 00:04:58.244 "nvme_io": false, 00:04:58.244 "nvme_io_md": false, 00:04:58.244 "write_zeroes": true, 00:04:58.244 "zcopy": true, 00:04:58.244 "get_zone_info": false, 00:04:58.244 "zone_management": false, 00:04:58.244 "zone_append": false, 00:04:58.244 "compare": false, 00:04:58.244 "compare_and_write": false, 00:04:58.244 "abort": true, 00:04:58.244 "seek_hole": false, 00:04:58.244 "seek_data": false, 00:04:58.244 "copy": true, 00:04:58.244 "nvme_iov_md": false 00:04:58.244 }, 00:04:58.244 "memory_domains": [ 00:04:58.244 { 00:04:58.244 "dma_device_id": "system", 00:04:58.244 "dma_device_type": 1 00:04:58.244 }, 00:04:58.244 { 00:04:58.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.244 "dma_device_type": 2 00:04:58.244 } 00:04:58.244 ], 00:04:58.244 "driver_specific": {} 00:04:58.244 }, 00:04:58.244 { 00:04:58.244 "name": "Passthru0", 00:04:58.244 "aliases": [ 00:04:58.244 "f64b3ab5-bf4e-57fa-87c6-0dca7a6f8add" 00:04:58.244 ], 00:04:58.244 "product_name": "passthru", 00:04:58.244 "block_size": 512, 00:04:58.244 "num_blocks": 16384, 00:04:58.244 "uuid": "f64b3ab5-bf4e-57fa-87c6-0dca7a6f8add", 00:04:58.244 "assigned_rate_limits": { 00:04:58.244 "rw_ios_per_sec": 0, 00:04:58.244 "rw_mbytes_per_sec": 0, 00:04:58.244 "r_mbytes_per_sec": 0, 00:04:58.244 "w_mbytes_per_sec": 0 00:04:58.244 }, 00:04:58.244 "claimed": false, 00:04:58.244 "zoned": false, 00:04:58.244 "supported_io_types": { 00:04:58.244 "read": true, 00:04:58.244 "write": true, 00:04:58.244 "unmap": true, 00:04:58.244 "flush": true, 00:04:58.244 "reset": true, 00:04:58.244 "nvme_admin": false, 00:04:58.244 "nvme_io": false, 00:04:58.244 "nvme_io_md": false, 00:04:58.244 "write_zeroes": true, 00:04:58.244 "zcopy": true, 00:04:58.244 "get_zone_info": false, 00:04:58.244 "zone_management": false, 00:04:58.244 "zone_append": false, 00:04:58.244 "compare": false, 00:04:58.244 "compare_and_write": false, 00:04:58.244 "abort": true, 00:04:58.244 "seek_hole": false, 00:04:58.244 "seek_data": false, 00:04:58.244 "copy": true, 00:04:58.244 "nvme_iov_md": false 00:04:58.244 }, 00:04:58.244 "memory_domains": [ 00:04:58.244 { 00:04:58.244 "dma_device_id": "system", 00:04:58.244 "dma_device_type": 1 00:04:58.244 }, 00:04:58.244 { 00:04:58.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.244 "dma_device_type": 2 
00:04:58.244 } 00:04:58.244 ], 00:04:58.244 "driver_specific": { 00:04:58.244 "passthru": { 00:04:58.244 "name": "Passthru0", 00:04:58.244 "base_bdev_name": "Malloc0" 00:04:58.244 } 00:04:58.244 } 00:04:58.244 } 00:04:58.244 ]' 00:04:58.244 14:57:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:58.244 14:57:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:58.244 14:57:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:58.245 14:57:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.245 14:57:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.245 14:57:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.245 14:57:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:58.245 14:57:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.245 14:57:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.513 14:57:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.513 14:57:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:58.513 14:57:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.513 14:57:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.513 14:57:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.513 14:57:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:58.513 14:57:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:58.513 ************************************ 00:04:58.513 END TEST rpc_integrity 00:04:58.513 ************************************ 00:04:58.513 14:57:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:58.513 00:04:58.513 real 0m0.396s 00:04:58.513 user 0m0.203s 00:04:58.513 sys 0m0.072s 00:04:58.513 14:57:59 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.513 14:57:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.513 14:57:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:58.513 14:57:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.513 14:57:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.513 14:57:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.513 ************************************ 00:04:58.513 START TEST rpc_plugins 00:04:58.513 ************************************ 00:04:58.513 14:57:59 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:58.514 14:57:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:58.514 14:57:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.514 14:57:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.514 14:57:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.514 14:57:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:58.514 14:57:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:58.514 14:57:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.514 14:57:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.514 14:57:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.514 14:57:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:58.514 { 00:04:58.514 "name": "Malloc1", 00:04:58.514 "aliases": 
[ 00:04:58.514 "0ce61584-0f34-43d8-a7b9-5034a8eedb9f" 00:04:58.514 ], 00:04:58.514 "product_name": "Malloc disk", 00:04:58.514 "block_size": 4096, 00:04:58.514 "num_blocks": 256, 00:04:58.514 "uuid": "0ce61584-0f34-43d8-a7b9-5034a8eedb9f", 00:04:58.514 "assigned_rate_limits": { 00:04:58.514 "rw_ios_per_sec": 0, 00:04:58.514 "rw_mbytes_per_sec": 0, 00:04:58.514 "r_mbytes_per_sec": 0, 00:04:58.514 "w_mbytes_per_sec": 0 00:04:58.514 }, 00:04:58.514 "claimed": false, 00:04:58.514 "zoned": false, 00:04:58.514 "supported_io_types": { 00:04:58.514 "read": true, 00:04:58.514 "write": true, 00:04:58.514 "unmap": true, 00:04:58.514 "flush": true, 00:04:58.514 "reset": true, 00:04:58.514 "nvme_admin": false, 00:04:58.514 "nvme_io": false, 00:04:58.514 "nvme_io_md": false, 00:04:58.514 "write_zeroes": true, 00:04:58.514 "zcopy": true, 00:04:58.514 "get_zone_info": false, 00:04:58.514 "zone_management": false, 00:04:58.514 "zone_append": false, 00:04:58.514 "compare": false, 00:04:58.514 "compare_and_write": false, 00:04:58.514 "abort": true, 00:04:58.514 "seek_hole": false, 00:04:58.514 "seek_data": false, 00:04:58.514 "copy": true, 00:04:58.514 "nvme_iov_md": false 00:04:58.514 }, 00:04:58.514 "memory_domains": [ 00:04:58.514 { 00:04:58.514 "dma_device_id": "system", 00:04:58.514 "dma_device_type": 1 00:04:58.514 }, 00:04:58.515 { 00:04:58.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.515 "dma_device_type": 2 00:04:58.515 } 00:04:58.515 ], 00:04:58.515 "driver_specific": {} 00:04:58.515 } 00:04:58.515 ]' 00:04:58.515 14:57:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:58.515 14:57:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:58.515 14:57:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:58.515 14:57:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.515 14:57:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.792 14:57:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.793 14:57:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:58.793 14:57:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.793 14:57:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.793 14:57:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.793 14:57:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:58.793 14:57:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:58.793 ************************************ 00:04:58.793 END TEST rpc_plugins 00:04:58.793 ************************************ 00:04:58.793 14:57:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:58.793 00:04:58.793 real 0m0.183s 00:04:58.793 user 0m0.094s 00:04:58.793 sys 0m0.039s 00:04:58.793 14:57:59 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.793 14:57:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.793 14:57:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:58.793 14:57:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.793 14:57:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.793 14:57:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.793 ************************************ 00:04:58.793 START TEST rpc_trace_cmd_test 00:04:58.793 ************************************ 00:04:58.793 14:57:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:58.793 14:57:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:58.793 14:57:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:58.793 14:57:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.793 14:57:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:58.793 14:57:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.793 14:57:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:58.793 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57747", 00:04:58.793 "tpoint_group_mask": "0x8", 00:04:58.793 "iscsi_conn": { 00:04:58.793 "mask": "0x2", 00:04:58.793 "tpoint_mask": "0x0" 00:04:58.793 }, 00:04:58.793 "scsi": { 00:04:58.793 "mask": "0x4", 00:04:58.793 "tpoint_mask": "0x0" 00:04:58.793 }, 00:04:58.793 "bdev": { 00:04:58.793 "mask": "0x8", 00:04:58.793 "tpoint_mask": "0xffffffffffffffff" 00:04:58.793 }, 00:04:58.793 "nvmf_rdma": { 00:04:58.793 "mask": "0x10", 00:04:58.793 "tpoint_mask": "0x0" 00:04:58.793 }, 00:04:58.793 "nvmf_tcp": { 00:04:58.793 "mask": "0x20", 00:04:58.793 "tpoint_mask": "0x0" 00:04:58.793 }, 00:04:58.793 "ftl": { 00:04:58.793 "mask": "0x40", 00:04:58.793 "tpoint_mask": "0x0" 00:04:58.793 }, 00:04:58.793 "blobfs": { 00:04:58.793 "mask": "0x80", 00:04:58.793 "tpoint_mask": "0x0" 00:04:58.793 }, 00:04:58.793 "dsa": { 00:04:58.793 "mask": "0x200", 00:04:58.793 "tpoint_mask": "0x0" 00:04:58.793 }, 00:04:58.793 "thread": { 00:04:58.793 "mask": "0x400", 00:04:58.793 "tpoint_mask": "0x0" 00:04:58.793 }, 00:04:58.793 "nvme_pcie": { 00:04:58.793 "mask": "0x800", 00:04:58.793 "tpoint_mask": "0x0" 00:04:58.793 }, 00:04:58.793 "iaa": { 00:04:58.793 "mask": "0x1000", 00:04:58.793 "tpoint_mask": "0x0" 00:04:58.793 }, 00:04:58.793 "nvme_tcp": { 00:04:58.793 "mask": "0x2000", 00:04:58.793 "tpoint_mask": "0x0" 00:04:58.793 }, 00:04:58.793 "bdev_nvme": { 00:04:58.793 "mask": "0x4000", 00:04:58.793 "tpoint_mask": "0x0" 00:04:58.793 }, 00:04:58.793 "sock": { 00:04:58.793 "mask": "0x8000", 00:04:58.793 "tpoint_mask": "0x0" 00:04:58.793 }, 00:04:58.793 "blob": { 00:04:58.793 "mask": "0x10000", 00:04:58.793 "tpoint_mask": "0x0" 00:04:58.793 }, 00:04:58.793 "bdev_raid": { 00:04:58.793 "mask": "0x20000", 00:04:58.793 "tpoint_mask": "0x0" 00:04:58.793 }, 00:04:58.793 "scheduler": { 00:04:58.793 "mask": "0x40000", 00:04:58.793 "tpoint_mask": "0x0" 00:04:58.793 } 00:04:58.793 }' 00:04:58.793 14:57:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:58.793 14:57:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:58.793 14:57:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:58.793 14:57:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:58.793 14:57:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:59.052 14:57:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:59.052 14:57:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:59.052 14:57:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:59.052 14:57:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:59.052 ************************************ 00:04:59.052 END TEST rpc_trace_cmd_test 00:04:59.052 ************************************ 00:04:59.052 14:57:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:59.052 00:04:59.052 real 0m0.255s 
00:04:59.052 user 0m0.194s 00:04:59.052 sys 0m0.049s 00:04:59.052 14:57:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.052 14:57:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:59.052 14:57:59 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:59.052 14:57:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:59.052 14:57:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:59.052 14:57:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.052 14:57:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.052 14:57:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.052 ************************************ 00:04:59.052 START TEST rpc_daemon_integrity 00:04:59.052 ************************************ 00:04:59.052 14:57:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:59.052 14:57:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.052 14:57:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.052 14:57:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.052 14:57:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.052 14:57:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.052 14:57:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:59.052 14:57:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:59.052 14:57:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.052 14:57:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.052 14:57:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.311 14:57:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.311 14:57:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:59.311 14:57:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:59.311 14:57:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.311 14:57:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.311 14:57:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.311 14:57:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:59.311 { 00:04:59.311 "name": "Malloc2", 00:04:59.311 "aliases": [ 00:04:59.311 "af3789ac-0010-432f-8aff-cd8cc1f15e14" 00:04:59.311 ], 00:04:59.311 "product_name": "Malloc disk", 00:04:59.311 "block_size": 512, 00:04:59.311 "num_blocks": 16384, 00:04:59.311 "uuid": "af3789ac-0010-432f-8aff-cd8cc1f15e14", 00:04:59.311 "assigned_rate_limits": { 00:04:59.311 "rw_ios_per_sec": 0, 00:04:59.311 "rw_mbytes_per_sec": 0, 00:04:59.311 "r_mbytes_per_sec": 0, 00:04:59.311 "w_mbytes_per_sec": 0 00:04:59.311 }, 00:04:59.311 "claimed": false, 00:04:59.311 "zoned": false, 00:04:59.311 "supported_io_types": { 00:04:59.311 "read": true, 00:04:59.311 "write": true, 00:04:59.311 "unmap": true, 00:04:59.311 "flush": true, 00:04:59.311 "reset": true, 00:04:59.311 "nvme_admin": false, 00:04:59.311 "nvme_io": false, 00:04:59.311 "nvme_io_md": false, 00:04:59.311 "write_zeroes": true, 00:04:59.311 "zcopy": true, 00:04:59.311 "get_zone_info": false, 00:04:59.311 "zone_management": false, 00:04:59.311 "zone_append": false, 00:04:59.311 "compare": false, 00:04:59.311 
"compare_and_write": false, 00:04:59.311 "abort": true, 00:04:59.311 "seek_hole": false, 00:04:59.311 "seek_data": false, 00:04:59.311 "copy": true, 00:04:59.311 "nvme_iov_md": false 00:04:59.311 }, 00:04:59.311 "memory_domains": [ 00:04:59.311 { 00:04:59.311 "dma_device_id": "system", 00:04:59.311 "dma_device_type": 1 00:04:59.311 }, 00:04:59.311 { 00:04:59.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.311 "dma_device_type": 2 00:04:59.311 } 00:04:59.311 ], 00:04:59.311 "driver_specific": {} 00:04:59.311 } 00:04:59.311 ]' 00:04:59.311 14:57:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:59.311 14:57:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.311 14:57:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:59.311 14:57:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.311 14:57:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.312 [2024-11-20 14:57:59.992363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:59.312 [2024-11-20 14:57:59.992478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.312 [2024-11-20 14:57:59.992512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:59.312 [2024-11-20 14:57:59.992529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.312 [2024-11-20 14:57:59.995700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.312 [2024-11-20 14:57:59.995770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.312 Passthru0 00:04:59.312 14:57:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.312 14:57:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.312 14:57:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.312 14:57:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.312 14:58:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.312 14:58:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.312 { 00:04:59.312 "name": "Malloc2", 00:04:59.312 "aliases": [ 00:04:59.312 "af3789ac-0010-432f-8aff-cd8cc1f15e14" 00:04:59.312 ], 00:04:59.312 "product_name": "Malloc disk", 00:04:59.312 "block_size": 512, 00:04:59.312 "num_blocks": 16384, 00:04:59.312 "uuid": "af3789ac-0010-432f-8aff-cd8cc1f15e14", 00:04:59.312 "assigned_rate_limits": { 00:04:59.312 "rw_ios_per_sec": 0, 00:04:59.312 "rw_mbytes_per_sec": 0, 00:04:59.312 "r_mbytes_per_sec": 0, 00:04:59.312 "w_mbytes_per_sec": 0 00:04:59.312 }, 00:04:59.312 "claimed": true, 00:04:59.312 "claim_type": "exclusive_write", 00:04:59.312 "zoned": false, 00:04:59.312 "supported_io_types": { 00:04:59.312 "read": true, 00:04:59.312 "write": true, 00:04:59.312 "unmap": true, 00:04:59.312 "flush": true, 00:04:59.312 "reset": true, 00:04:59.312 "nvme_admin": false, 00:04:59.312 "nvme_io": false, 00:04:59.312 "nvme_io_md": false, 00:04:59.312 "write_zeroes": true, 00:04:59.312 "zcopy": true, 00:04:59.312 "get_zone_info": false, 00:04:59.312 "zone_management": false, 00:04:59.312 "zone_append": false, 00:04:59.312 "compare": false, 00:04:59.312 "compare_and_write": false, 00:04:59.312 "abort": true, 00:04:59.312 "seek_hole": false, 00:04:59.312 "seek_data": false, 
00:04:59.312 "copy": true, 00:04:59.312 "nvme_iov_md": false 00:04:59.312 }, 00:04:59.312 "memory_domains": [ 00:04:59.312 { 00:04:59.312 "dma_device_id": "system", 00:04:59.312 "dma_device_type": 1 00:04:59.312 }, 00:04:59.312 { 00:04:59.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.312 "dma_device_type": 2 00:04:59.312 } 00:04:59.312 ], 00:04:59.312 "driver_specific": {} 00:04:59.312 }, 00:04:59.312 { 00:04:59.312 "name": "Passthru0", 00:04:59.312 "aliases": [ 00:04:59.312 "c2f030c4-e0d6-528b-9706-1d74965f926a" 00:04:59.312 ], 00:04:59.312 "product_name": "passthru", 00:04:59.312 "block_size": 512, 00:04:59.312 "num_blocks": 16384, 00:04:59.312 "uuid": "c2f030c4-e0d6-528b-9706-1d74965f926a", 00:04:59.312 "assigned_rate_limits": { 00:04:59.312 "rw_ios_per_sec": 0, 00:04:59.312 "rw_mbytes_per_sec": 0, 00:04:59.312 "r_mbytes_per_sec": 0, 00:04:59.312 "w_mbytes_per_sec": 0 00:04:59.312 }, 00:04:59.312 "claimed": false, 00:04:59.312 "zoned": false, 00:04:59.312 "supported_io_types": { 00:04:59.312 "read": true, 00:04:59.312 "write": true, 00:04:59.312 "unmap": true, 00:04:59.312 "flush": true, 00:04:59.312 "reset": true, 00:04:59.312 "nvme_admin": false, 00:04:59.312 "nvme_io": false, 00:04:59.312 "nvme_io_md": false, 00:04:59.312 "write_zeroes": true, 00:04:59.312 "zcopy": true, 00:04:59.312 "get_zone_info": false, 00:04:59.312 "zone_management": false, 00:04:59.312 "zone_append": false, 00:04:59.312 "compare": false, 00:04:59.312 "compare_and_write": false, 00:04:59.312 "abort": true, 00:04:59.312 "seek_hole": false, 00:04:59.312 "seek_data": false, 00:04:59.312 "copy": true, 00:04:59.312 "nvme_iov_md": false 00:04:59.312 }, 00:04:59.312 "memory_domains": [ 00:04:59.312 { 00:04:59.312 "dma_device_id": "system", 00:04:59.312 "dma_device_type": 1 00:04:59.312 }, 00:04:59.312 { 00:04:59.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.312 "dma_device_type": 2 00:04:59.312 } 00:04:59.312 ], 00:04:59.312 "driver_specific": { 00:04:59.312 "passthru": { 00:04:59.312 "name": "Passthru0", 00:04:59.312 "base_bdev_name": "Malloc2" 00:04:59.312 } 00:04:59.312 } 00:04:59.312 } 00:04:59.312 ]' 00:04:59.312 14:58:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:59.312 14:58:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.312 14:58:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.312 14:58:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.312 14:58:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.312 14:58:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.312 14:58:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:59.312 14:58:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.312 14:58:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.312 14:58:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.572 14:58:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:59.572 14:58:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.572 14:58:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.572 14:58:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.572 14:58:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:59.572 14:58:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:59.572 14:58:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.572 ************************************ 00:04:59.572 END TEST rpc_daemon_integrity 00:04:59.572 ************************************ 00:04:59.572 00:04:59.572 real 0m0.390s 00:04:59.572 user 0m0.198s 00:04:59.572 sys 0m0.074s 00:04:59.572 14:58:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.572 14:58:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.572 14:58:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:59.572 14:58:00 rpc -- rpc/rpc.sh@84 -- # killprocess 57747 00:04:59.572 14:58:00 rpc -- common/autotest_common.sh@954 -- # '[' -z 57747 ']' 00:04:59.572 14:58:00 rpc -- common/autotest_common.sh@958 -- # kill -0 57747 00:04:59.572 14:58:00 rpc -- common/autotest_common.sh@959 -- # uname 00:04:59.572 14:58:00 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.572 14:58:00 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57747 00:04:59.572 killing process with pid 57747 00:04:59.572 14:58:00 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.572 14:58:00 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.572 14:58:00 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57747' 00:04:59.572 14:58:00 rpc -- common/autotest_common.sh@973 -- # kill 57747 00:04:59.572 14:58:00 rpc -- common/autotest_common.sh@978 -- # wait 57747 00:05:02.863 00:05:02.863 real 0m6.004s 00:05:02.863 user 0m6.414s 00:05:02.863 sys 0m1.213s 00:05:02.863 ************************************ 00:05:02.863 END TEST rpc 00:05:02.863 ************************************ 00:05:02.863 14:58:02 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.863 14:58:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.863 14:58:03 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:02.863 14:58:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.863 14:58:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.863 14:58:03 -- common/autotest_common.sh@10 -- # set +x 00:05:02.863 ************************************ 00:05:02.863 START TEST skip_rpc 00:05:02.863 ************************************ 00:05:02.863 14:58:03 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:02.863 * Looking for test storage... 
00:05:02.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.863 14:58:03 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:02.863 14:58:03 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:02.863 14:58:03 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:02.863 14:58:03 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:02.863 14:58:03 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.863 14:58:03 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.863 14:58:03 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.863 14:58:03 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.863 14:58:03 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.863 14:58:03 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.863 14:58:03 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.863 14:58:03 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.863 14:58:03 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.863 14:58:03 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.863 14:58:03 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.863 14:58:03 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:02.863 14:58:03 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:02.863 14:58:03 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.863 14:58:03 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.863 14:58:03 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:02.863 14:58:03 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:02.864 14:58:03 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.864 14:58:03 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:02.864 14:58:03 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.864 14:58:03 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:02.864 14:58:03 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:02.864 14:58:03 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.864 14:58:03 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:02.864 14:58:03 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.864 14:58:03 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.864 14:58:03 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.864 14:58:03 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:02.864 14:58:03 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.864 14:58:03 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:02.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.864 --rc genhtml_branch_coverage=1 00:05:02.864 --rc genhtml_function_coverage=1 00:05:02.864 --rc genhtml_legend=1 00:05:02.864 --rc geninfo_all_blocks=1 00:05:02.864 --rc geninfo_unexecuted_blocks=1 00:05:02.864 00:05:02.864 ' 00:05:02.864 14:58:03 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:02.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.864 --rc genhtml_branch_coverage=1 00:05:02.864 --rc genhtml_function_coverage=1 00:05:02.864 --rc genhtml_legend=1 00:05:02.864 --rc geninfo_all_blocks=1 00:05:02.864 --rc geninfo_unexecuted_blocks=1 00:05:02.864 00:05:02.864 ' 00:05:02.864 14:58:03 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:02.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.864 --rc genhtml_branch_coverage=1 00:05:02.864 --rc genhtml_function_coverage=1 00:05:02.864 --rc genhtml_legend=1 00:05:02.864 --rc geninfo_all_blocks=1 00:05:02.864 --rc geninfo_unexecuted_blocks=1 00:05:02.864 00:05:02.864 ' 00:05:02.864 14:58:03 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:02.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.864 --rc genhtml_branch_coverage=1 00:05:02.864 --rc genhtml_function_coverage=1 00:05:02.864 --rc genhtml_legend=1 00:05:02.864 --rc geninfo_all_blocks=1 00:05:02.864 --rc geninfo_unexecuted_blocks=1 00:05:02.864 00:05:02.864 ' 00:05:02.864 14:58:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:02.864 14:58:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:02.864 14:58:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:02.864 14:58:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.864 14:58:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.864 14:58:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.864 ************************************ 00:05:02.864 START TEST skip_rpc 00:05:02.864 ************************************ 00:05:02.864 14:58:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:02.864 14:58:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57982 00:05:02.864 14:58:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:02.864 14:58:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.864 14:58:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:02.864 [2024-11-20 14:58:03.475443] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:05:02.864 [2024-11-20 14:58:03.475604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57982 ] 00:05:02.864 [2024-11-20 14:58:03.670483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.122 [2024-11-20 14:58:03.819349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57982 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57982 ']' 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57982 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57982 00:05:08.397 killing process with pid 57982 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57982' 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57982 00:05:08.397 14:58:08 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57982 00:05:10.302 ************************************ 00:05:10.302 END TEST skip_rpc 00:05:10.302 ************************************ 00:05:10.302 00:05:10.302 real 0m7.770s 00:05:10.302 user 0m7.072s 00:05:10.302 sys 0m0.606s 00:05:10.302 14:58:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.302 14:58:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:10.561 14:58:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:10.561 14:58:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.562 14:58:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.562 14:58:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.562 ************************************ 00:05:10.562 START TEST skip_rpc_with_json 00:05:10.562 ************************************ 00:05:10.562 14:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:10.562 14:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:10.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.562 14:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58097 00:05:10.562 14:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.562 14:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.562 14:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58097 00:05:10.562 14:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58097 ']' 00:05:10.562 14:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.562 14:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.562 14:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.562 14:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.562 14:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.562 [2024-11-20 14:58:11.333684] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:05:10.562 [2024-11-20 14:58:11.334285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58097 ] 00:05:10.821 [2024-11-20 14:58:11.532526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.080 [2024-11-20 14:58:11.683572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.038 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.038 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:12.038 14:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:12.038 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.038 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.038 [2024-11-20 14:58:12.731597] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:12.038 request: 00:05:12.038 { 00:05:12.038 "trtype": "tcp", 00:05:12.038 "method": "nvmf_get_transports", 00:05:12.038 "req_id": 1 00:05:12.038 } 00:05:12.038 Got JSON-RPC error response 00:05:12.038 response: 00:05:12.038 { 00:05:12.038 "code": -19, 00:05:12.038 "message": "No such device" 00:05:12.038 } 00:05:12.038 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:12.038 14:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:12.038 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.038 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.038 [2024-11-20 14:58:12.743809] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.038 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.038 14:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:12.038 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.038 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.299 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.299 14:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:12.299 { 00:05:12.299 "subsystems": [ 00:05:12.299 { 00:05:12.299 "subsystem": "fsdev", 00:05:12.299 "config": [ 00:05:12.299 { 00:05:12.299 "method": "fsdev_set_opts", 00:05:12.299 "params": { 00:05:12.299 "fsdev_io_pool_size": 65535, 00:05:12.299 "fsdev_io_cache_size": 256 00:05:12.299 } 00:05:12.299 } 00:05:12.299 ] 00:05:12.299 }, 00:05:12.299 { 00:05:12.299 "subsystem": "keyring", 00:05:12.299 "config": [] 00:05:12.299 }, 00:05:12.299 { 00:05:12.299 "subsystem": "iobuf", 00:05:12.299 "config": [ 00:05:12.299 { 00:05:12.299 "method": "iobuf_set_options", 00:05:12.299 "params": { 00:05:12.299 "small_pool_count": 8192, 00:05:12.299 "large_pool_count": 1024, 00:05:12.299 "small_bufsize": 8192, 00:05:12.299 "large_bufsize": 135168, 00:05:12.299 "enable_numa": false 00:05:12.299 } 00:05:12.299 } 00:05:12.299 ] 00:05:12.299 }, 00:05:12.299 { 00:05:12.299 "subsystem": "sock", 00:05:12.299 "config": [ 00:05:12.299 { 
00:05:12.299 "method": "sock_set_default_impl", 00:05:12.299 "params": { 00:05:12.299 "impl_name": "posix" 00:05:12.299 } 00:05:12.299 }, 00:05:12.299 { 00:05:12.299 "method": "sock_impl_set_options", 00:05:12.299 "params": { 00:05:12.299 "impl_name": "ssl", 00:05:12.299 "recv_buf_size": 4096, 00:05:12.299 "send_buf_size": 4096, 00:05:12.299 "enable_recv_pipe": true, 00:05:12.299 "enable_quickack": false, 00:05:12.299 "enable_placement_id": 0, 00:05:12.299 "enable_zerocopy_send_server": true, 00:05:12.299 "enable_zerocopy_send_client": false, 00:05:12.299 "zerocopy_threshold": 0, 00:05:12.299 "tls_version": 0, 00:05:12.299 "enable_ktls": false 00:05:12.299 } 00:05:12.299 }, 00:05:12.299 { 00:05:12.299 "method": "sock_impl_set_options", 00:05:12.299 "params": { 00:05:12.299 "impl_name": "posix", 00:05:12.299 "recv_buf_size": 2097152, 00:05:12.299 "send_buf_size": 2097152, 00:05:12.299 "enable_recv_pipe": true, 00:05:12.299 "enable_quickack": false, 00:05:12.300 "enable_placement_id": 0, 00:05:12.300 "enable_zerocopy_send_server": true, 00:05:12.300 "enable_zerocopy_send_client": false, 00:05:12.300 "zerocopy_threshold": 0, 00:05:12.300 "tls_version": 0, 00:05:12.300 "enable_ktls": false 00:05:12.300 } 00:05:12.300 } 00:05:12.300 ] 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "subsystem": "vmd", 00:05:12.300 "config": [] 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "subsystem": "accel", 00:05:12.300 "config": [ 00:05:12.300 { 00:05:12.300 "method": "accel_set_options", 00:05:12.300 "params": { 00:05:12.300 "small_cache_size": 128, 00:05:12.300 "large_cache_size": 16, 00:05:12.300 "task_count": 2048, 00:05:12.300 "sequence_count": 2048, 00:05:12.300 "buf_count": 2048 00:05:12.300 } 00:05:12.300 } 00:05:12.300 ] 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "subsystem": "bdev", 00:05:12.300 "config": [ 00:05:12.300 { 00:05:12.300 "method": "bdev_set_options", 00:05:12.300 "params": { 00:05:12.300 "bdev_io_pool_size": 65535, 00:05:12.300 "bdev_io_cache_size": 256, 00:05:12.300 "bdev_auto_examine": true, 00:05:12.300 "iobuf_small_cache_size": 128, 00:05:12.300 "iobuf_large_cache_size": 16 00:05:12.300 } 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "method": "bdev_raid_set_options", 00:05:12.300 "params": { 00:05:12.300 "process_window_size_kb": 1024, 00:05:12.300 "process_max_bandwidth_mb_sec": 0 00:05:12.300 } 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "method": "bdev_iscsi_set_options", 00:05:12.300 "params": { 00:05:12.300 "timeout_sec": 30 00:05:12.300 } 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "method": "bdev_nvme_set_options", 00:05:12.300 "params": { 00:05:12.300 "action_on_timeout": "none", 00:05:12.300 "timeout_us": 0, 00:05:12.300 "timeout_admin_us": 0, 00:05:12.300 "keep_alive_timeout_ms": 10000, 00:05:12.300 "arbitration_burst": 0, 00:05:12.300 "low_priority_weight": 0, 00:05:12.300 "medium_priority_weight": 0, 00:05:12.300 "high_priority_weight": 0, 00:05:12.300 "nvme_adminq_poll_period_us": 10000, 00:05:12.300 "nvme_ioq_poll_period_us": 0, 00:05:12.300 "io_queue_requests": 0, 00:05:12.300 "delay_cmd_submit": true, 00:05:12.300 "transport_retry_count": 4, 00:05:12.300 "bdev_retry_count": 3, 00:05:12.300 "transport_ack_timeout": 0, 00:05:12.300 "ctrlr_loss_timeout_sec": 0, 00:05:12.300 "reconnect_delay_sec": 0, 00:05:12.300 "fast_io_fail_timeout_sec": 0, 00:05:12.300 "disable_auto_failback": false, 00:05:12.300 "generate_uuids": false, 00:05:12.300 "transport_tos": 0, 00:05:12.300 "nvme_error_stat": false, 00:05:12.300 "rdma_srq_size": 0, 00:05:12.300 "io_path_stat": false, 
00:05:12.300 "allow_accel_sequence": false, 00:05:12.300 "rdma_max_cq_size": 0, 00:05:12.300 "rdma_cm_event_timeout_ms": 0, 00:05:12.300 "dhchap_digests": [ 00:05:12.300 "sha256", 00:05:12.300 "sha384", 00:05:12.300 "sha512" 00:05:12.300 ], 00:05:12.300 "dhchap_dhgroups": [ 00:05:12.300 "null", 00:05:12.300 "ffdhe2048", 00:05:12.300 "ffdhe3072", 00:05:12.300 "ffdhe4096", 00:05:12.300 "ffdhe6144", 00:05:12.300 "ffdhe8192" 00:05:12.300 ] 00:05:12.300 } 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "method": "bdev_nvme_set_hotplug", 00:05:12.300 "params": { 00:05:12.300 "period_us": 100000, 00:05:12.300 "enable": false 00:05:12.300 } 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "method": "bdev_wait_for_examine" 00:05:12.300 } 00:05:12.300 ] 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "subsystem": "scsi", 00:05:12.300 "config": null 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "subsystem": "scheduler", 00:05:12.300 "config": [ 00:05:12.300 { 00:05:12.300 "method": "framework_set_scheduler", 00:05:12.300 "params": { 00:05:12.300 "name": "static" 00:05:12.300 } 00:05:12.300 } 00:05:12.300 ] 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "subsystem": "vhost_scsi", 00:05:12.300 "config": [] 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "subsystem": "vhost_blk", 00:05:12.300 "config": [] 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "subsystem": "ublk", 00:05:12.300 "config": [] 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "subsystem": "nbd", 00:05:12.300 "config": [] 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "subsystem": "nvmf", 00:05:12.300 "config": [ 00:05:12.300 { 00:05:12.300 "method": "nvmf_set_config", 00:05:12.300 "params": { 00:05:12.300 "discovery_filter": "match_any", 00:05:12.300 "admin_cmd_passthru": { 00:05:12.300 "identify_ctrlr": false 00:05:12.300 }, 00:05:12.300 "dhchap_digests": [ 00:05:12.300 "sha256", 00:05:12.300 "sha384", 00:05:12.300 "sha512" 00:05:12.300 ], 00:05:12.300 "dhchap_dhgroups": [ 00:05:12.300 "null", 00:05:12.300 "ffdhe2048", 00:05:12.300 "ffdhe3072", 00:05:12.300 "ffdhe4096", 00:05:12.300 "ffdhe6144", 00:05:12.300 "ffdhe8192" 00:05:12.300 ] 00:05:12.300 } 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "method": "nvmf_set_max_subsystems", 00:05:12.300 "params": { 00:05:12.300 "max_subsystems": 1024 00:05:12.300 } 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "method": "nvmf_set_crdt", 00:05:12.300 "params": { 00:05:12.300 "crdt1": 0, 00:05:12.300 "crdt2": 0, 00:05:12.300 "crdt3": 0 00:05:12.300 } 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "method": "nvmf_create_transport", 00:05:12.300 "params": { 00:05:12.300 "trtype": "TCP", 00:05:12.300 "max_queue_depth": 128, 00:05:12.300 "max_io_qpairs_per_ctrlr": 127, 00:05:12.300 "in_capsule_data_size": 4096, 00:05:12.300 "max_io_size": 131072, 00:05:12.300 "io_unit_size": 131072, 00:05:12.300 "max_aq_depth": 128, 00:05:12.300 "num_shared_buffers": 511, 00:05:12.300 "buf_cache_size": 4294967295, 00:05:12.300 "dif_insert_or_strip": false, 00:05:12.300 "zcopy": false, 00:05:12.300 "c2h_success": true, 00:05:12.300 "sock_priority": 0, 00:05:12.300 "abort_timeout_sec": 1, 00:05:12.300 "ack_timeout": 0, 00:05:12.300 "data_wr_pool_size": 0 00:05:12.300 } 00:05:12.300 } 00:05:12.300 ] 00:05:12.300 }, 00:05:12.300 { 00:05:12.300 "subsystem": "iscsi", 00:05:12.300 "config": [ 00:05:12.300 { 00:05:12.300 "method": "iscsi_set_options", 00:05:12.300 "params": { 00:05:12.300 "node_base": "iqn.2016-06.io.spdk", 00:05:12.300 "max_sessions": 128, 00:05:12.300 "max_connections_per_session": 2, 00:05:12.300 "max_queue_depth": 64, 00:05:12.300 
"default_time2wait": 2, 00:05:12.300 "default_time2retain": 20, 00:05:12.300 "first_burst_length": 8192, 00:05:12.300 "immediate_data": true, 00:05:12.300 "allow_duplicated_isid": false, 00:05:12.300 "error_recovery_level": 0, 00:05:12.300 "nop_timeout": 60, 00:05:12.300 "nop_in_interval": 30, 00:05:12.300 "disable_chap": false, 00:05:12.300 "require_chap": false, 00:05:12.300 "mutual_chap": false, 00:05:12.300 "chap_group": 0, 00:05:12.300 "max_large_datain_per_connection": 64, 00:05:12.300 "max_r2t_per_connection": 4, 00:05:12.300 "pdu_pool_size": 36864, 00:05:12.300 "immediate_data_pool_size": 16384, 00:05:12.300 "data_out_pool_size": 2048 00:05:12.300 } 00:05:12.300 } 00:05:12.300 ] 00:05:12.300 } 00:05:12.300 ] 00:05:12.300 } 00:05:12.300 14:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:12.300 14:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58097 00:05:12.300 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58097 ']' 00:05:12.300 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58097 00:05:12.300 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:12.300 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.300 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58097 00:05:12.300 killing process with pid 58097 00:05:12.300 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.300 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.300 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58097' 00:05:12.300 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58097 00:05:12.300 14:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58097 00:05:14.866 14:58:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58153 00:05:14.866 14:58:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:14.866 14:58:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:20.132 14:58:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58153 00:05:20.132 14:58:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58153 ']' 00:05:20.132 14:58:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58153 00:05:20.132 14:58:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:20.132 14:58:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.132 14:58:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58153 00:05:20.132 killing process with pid 58153 00:05:20.132 14:58:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.132 14:58:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.132 14:58:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58153' 00:05:20.132 14:58:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58153 00:05:20.132 14:58:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58153 00:05:22.665 14:58:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:22.666 14:58:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:22.666 00:05:22.666 real 0m12.290s 00:05:22.666 user 0m11.317s 00:05:22.666 sys 0m1.310s 00:05:22.666 ************************************ 00:05:22.666 END TEST skip_rpc_with_json 00:05:22.666 ************************************ 00:05:22.666 14:58:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.666 14:58:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.925 14:58:23 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:22.925 14:58:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.925 14:58:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.925 14:58:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.925 ************************************ 00:05:22.925 START TEST skip_rpc_with_delay 00:05:22.925 ************************************ 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.925 [2024-11-20 14:58:23.672756] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
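(Aside: the skip_rpc_with_json pass that just finished above is a save/reload round trip: mutate state over RPC, dump it with save_config, then boot a fresh target from the dump with --no-rpc-server and verify the side effect in its log. A condensed sketch of that flow, assuming a first target is already running and reachable via scripts/rpc.py; $spdk_pid is its pid, and the /tmp paths stand in for the repo's config.json and log.txt:

    scripts/rpc.py nvmf_create_transport -t tcp          # runtime change to capture
    scripts/rpc.py save_config > /tmp/config.json        # state dumped as JSON-RPC method calls
    kill "$spdk_pid"; wait "$spdk_pid" || true
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json \
        > /tmp/log.txt 2>&1 &
    sleep 5                                              # the harness sleeps rather than polling RPC
    grep -q 'TCP Transport Init' /tmp/log.txt            # the transport came back from the config alone

Because the reloaded target runs with --no-rpc-server, the only way to confirm the config took effect is the log grep, which is why the trace above ends with grep and rm on log.txt.)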
00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.925 00:05:22.925 real 0m0.205s 00:05:22.925 user 0m0.093s 00:05:22.925 sys 0m0.110s 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.925 ************************************ 00:05:22.925 END TEST skip_rpc_with_delay 00:05:22.925 ************************************ 00:05:22.925 14:58:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:23.184 14:58:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:23.184 14:58:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:23.184 14:58:23 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:23.184 14:58:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.184 14:58:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.184 14:58:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.184 ************************************ 00:05:23.184 START TEST exit_on_failed_rpc_init 00:05:23.184 ************************************ 00:05:23.184 14:58:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:23.184 14:58:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.184 14:58:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58292 00:05:23.184 14:58:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58292 00:05:23.184 14:58:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58292 ']' 00:05:23.184 14:58:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.184 14:58:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.184 14:58:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.184 14:58:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.184 14:58:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:23.184 [2024-11-20 14:58:23.954436] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:05:23.184 [2024-11-20 14:58:23.954617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58292 ] 00:05:23.443 [2024-11-20 14:58:24.149988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.702 [2024-11-20 14:58:24.311312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.639 14:58:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.639 14:58:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:24.639 14:58:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.639 14:58:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.639 14:58:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:24.639 14:58:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.639 14:58:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.639 14:58:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.639 14:58:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.639 14:58:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.639 14:58:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.639 14:58:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.639 14:58:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.639 14:58:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:24.639 14:58:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.897 [2024-11-20 14:58:25.544520] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:05:24.897 [2024-11-20 14:58:25.545012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58316 ] 00:05:24.897 [2024-11-20 14:58:25.727432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.155 [2024-11-20 14:58:25.867105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.155 [2024-11-20 14:58:25.867245] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
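(Aside: the rpc.c error just logged is the expected failure: exit_on_failed_rpc_init starts a second spdk_tgt while the first still owns the default RPC socket /var/tmp/spdk.sock, and the second must refuse to come up. A bare-bones sketch of the collision, assuming both instances use the default socket path with no -r override, and a sleep in place of the harness's waitforlisten:

    build/bin/spdk_tgt -m 0x1 &             # first instance binds /var/tmp/spdk.sock
    first=$!
    sleep 1                                 # crude; the real test waits for the socket instead
    if build/bin/spdk_tgt -m 0x2; then      # same default socket, so its RPC listen fails
        echo 'second instance started unexpectedly' >&2
        exit 1
    fi
    kill "$first"; wait "$first" || true

Distinct core masks (0x1 vs 0x2) keep the two instances off the same reactor cores, so the only conflict exercised is the RPC socket itself, matching the -m 0x2 run in the trace above.)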
00:05:25.155 [2024-11-20 14:58:25.867265] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:25.155 [2024-11-20 14:58:25.867291] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58292 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58292 ']' 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58292 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58292 00:05:25.414 killing process with pid 58292 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58292' 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58292 00:05:25.414 14:58:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58292 00:05:28.703 00:05:28.703 real 0m5.178s 00:05:28.703 user 0m5.431s 00:05:28.703 sys 0m0.902s 00:05:28.703 14:58:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.703 ************************************ 00:05:28.703 END TEST exit_on_failed_rpc_init 00:05:28.703 ************************************ 00:05:28.703 14:58:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.703 14:58:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:28.703 ************************************ 00:05:28.703 END TEST skip_rpc 00:05:28.703 ************************************ 00:05:28.703 00:05:28.703 real 0m25.974s 00:05:28.703 user 0m24.123s 00:05:28.703 sys 0m3.245s 00:05:28.703 14:58:29 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.703 14:58:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.703 14:58:29 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:28.703 14:58:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.703 14:58:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.703 14:58:29 -- common/autotest_common.sh@10 -- # set +x 00:05:28.703 
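(Aside: every test above tears down through the same killprocess helper, whose xtrace is replayed repeatedly in this log: check a pid was supplied, assert the process is still alive, make sure it is not a sudo wrapper, SIGTERM it, then reap it. Roughly its shape, simplified from what the trace shows; the real helper in autotest_common.sh carries more error reporting:

    killprocess() {
        local pid=$1 process_name=""
        [ -n "$pid" ] || return 1                        # a pid must have been supplied
        kill -0 "$pid" || return 1                       # fails if the target already exited
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" != sudo ] || return 1          # never SIGTERM a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                              # reap; nonzero exit on SIGTERM is fine
    }

The comm= check is why the log prints process_name=reactor_0 before each kill: an SPDK target's main thread is named after its reactor, so that value doubles as a sanity check that the pid still belongs to the target.)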
************************************ 00:05:28.703 START TEST rpc_client 00:05:28.703 ************************************ 00:05:28.703 14:58:29 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:28.703 * Looking for test storage... 00:05:28.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:28.703 14:58:29 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:28.703 14:58:29 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:28.703 14:58:29 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:28.703 14:58:29 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.704 14:58:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:28.704 14:58:29 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.704 14:58:29 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:28.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.704 --rc genhtml_branch_coverage=1 00:05:28.704 --rc genhtml_function_coverage=1 00:05:28.704 --rc genhtml_legend=1 00:05:28.704 --rc geninfo_all_blocks=1 00:05:28.704 --rc geninfo_unexecuted_blocks=1 00:05:28.704 00:05:28.704 ' 00:05:28.704 14:58:29 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:28.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.704 --rc genhtml_branch_coverage=1 00:05:28.704 --rc genhtml_function_coverage=1 00:05:28.704 --rc genhtml_legend=1 00:05:28.704 --rc geninfo_all_blocks=1 00:05:28.704 --rc geninfo_unexecuted_blocks=1 00:05:28.704 00:05:28.704 ' 00:05:28.704 14:58:29 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:28.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.704 --rc genhtml_branch_coverage=1 00:05:28.704 --rc genhtml_function_coverage=1 00:05:28.704 --rc genhtml_legend=1 00:05:28.704 --rc geninfo_all_blocks=1 00:05:28.704 --rc geninfo_unexecuted_blocks=1 00:05:28.704 00:05:28.704 ' 00:05:28.704 14:58:29 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.704 --rc genhtml_branch_coverage=1 00:05:28.704 --rc genhtml_function_coverage=1 00:05:28.704 --rc genhtml_legend=1 00:05:28.704 --rc geninfo_all_blocks=1 00:05:28.704 --rc geninfo_unexecuted_blocks=1 00:05:28.704 00:05:28.704 ' 00:05:28.704 14:58:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:28.704 OK 00:05:28.704 14:58:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:28.704 00:05:28.704 real 0m0.332s 00:05:28.704 user 0m0.179s 00:05:28.704 sys 0m0.168s 00:05:28.704 ************************************ 00:05:28.704 END TEST rpc_client 00:05:28.704 ************************************ 00:05:28.704 14:58:29 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.704 14:58:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:28.704 14:58:29 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:28.704 14:58:29 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.704 14:58:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.704 14:58:29 -- common/autotest_common.sh@10 -- # set +x 00:05:28.704 ************************************ 00:05:28.704 START TEST json_config 00:05:28.704 ************************************ 00:05:28.704 14:58:29 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:28.964 14:58:29 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:28.964 14:58:29 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:28.964 14:58:29 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:28.964 14:58:29 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:28.964 14:58:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.964 14:58:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.964 14:58:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.964 14:58:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.964 14:58:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.964 14:58:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.964 14:58:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.964 14:58:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.964 14:58:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.964 14:58:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.964 14:58:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.964 14:58:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:28.964 14:58:29 json_config -- scripts/common.sh@345 -- # : 1 00:05:28.964 14:58:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.964 14:58:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.964 14:58:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:28.964 14:58:29 json_config -- scripts/common.sh@353 -- # local d=1 00:05:28.964 14:58:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.964 14:58:29 json_config -- scripts/common.sh@355 -- # echo 1 00:05:28.964 14:58:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.964 14:58:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:28.964 14:58:29 json_config -- scripts/common.sh@353 -- # local d=2 00:05:28.964 14:58:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.964 14:58:29 json_config -- scripts/common.sh@355 -- # echo 2 00:05:28.964 14:58:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.964 14:58:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.964 14:58:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.964 14:58:29 json_config -- scripts/common.sh@368 -- # return 0 00:05:28.964 14:58:29 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.964 14:58:29 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:28.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.964 --rc genhtml_branch_coverage=1 00:05:28.964 --rc genhtml_function_coverage=1 00:05:28.964 --rc genhtml_legend=1 00:05:28.964 --rc geninfo_all_blocks=1 00:05:28.964 --rc geninfo_unexecuted_blocks=1 00:05:28.964 00:05:28.964 ' 00:05:28.964 14:58:29 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:28.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.964 --rc genhtml_branch_coverage=1 00:05:28.964 --rc genhtml_function_coverage=1 00:05:28.964 --rc genhtml_legend=1 00:05:28.964 --rc geninfo_all_blocks=1 00:05:28.964 --rc geninfo_unexecuted_blocks=1 00:05:28.964 00:05:28.964 ' 00:05:28.964 14:58:29 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:28.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.964 --rc genhtml_branch_coverage=1 00:05:28.964 --rc genhtml_function_coverage=1 00:05:28.964 --rc genhtml_legend=1 00:05:28.964 --rc geninfo_all_blocks=1 00:05:28.964 --rc geninfo_unexecuted_blocks=1 00:05:28.964 00:05:28.964 ' 00:05:28.964 14:58:29 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.964 --rc genhtml_branch_coverage=1 00:05:28.964 --rc genhtml_function_coverage=1 00:05:28.964 --rc genhtml_legend=1 00:05:28.964 --rc geninfo_all_blocks=1 00:05:28.964 --rc geninfo_unexecuted_blocks=1 00:05:28.964 00:05:28.964 ' 00:05:28.964 14:58:29 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:28.964 14:58:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:28.964 14:58:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.964 14:58:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.964 14:58:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.964 14:58:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.964 14:58:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.964 14:58:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.964 14:58:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.964 14:58:29 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.964 14:58:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.964 14:58:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.964 14:58:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d5dbffa3-8145-4a26-bb17-cc33a438f929 00:05:28.964 14:58:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=d5dbffa3-8145-4a26-bb17-cc33a438f929 00:05:28.964 14:58:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.964 14:58:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.964 14:58:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:28.964 14:58:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.964 14:58:29 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:28.964 14:58:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:28.964 14:58:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.964 14:58:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.964 14:58:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.964 14:58:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.965 14:58:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.965 14:58:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.965 14:58:29 json_config -- paths/export.sh@5 -- # export PATH 00:05:28.965 14:58:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.965 14:58:29 json_config -- nvmf/common.sh@51 -- # : 0 00:05:28.965 14:58:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:28.965 14:58:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:28.965 14:58:29 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.965 14:58:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.965 14:58:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.965 14:58:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:28.965 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:28.965 14:58:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:28.965 14:58:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:28.965 14:58:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:28.965 14:58:29 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:28.965 14:58:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:28.965 14:58:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:28.965 WARNING: No tests are enabled so not running JSON configuration tests 00:05:28.965 14:58:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:28.965 14:58:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:28.965 14:58:29 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:28.965 14:58:29 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:28.965 ************************************ 00:05:28.965 END TEST json_config 00:05:28.965 ************************************ 00:05:28.965 00:05:28.965 real 0m0.202s 00:05:28.965 user 0m0.109s 00:05:28.965 sys 0m0.101s 00:05:28.965 14:58:29 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.965 14:58:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.965 14:58:29 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:28.965 14:58:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.965 14:58:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.965 14:58:29 -- common/autotest_common.sh@10 -- # set +x 00:05:28.965 ************************************ 00:05:28.965 START TEST json_config_extra_key 00:05:28.965 ************************************ 00:05:28.965 14:58:29 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:29.224 14:58:29 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.224 14:58:29 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.224 14:58:29 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.224 14:58:29 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.224 14:58:29 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.224 14:58:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:29.224 14:58:29 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.224 14:58:29 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.224 --rc genhtml_branch_coverage=1 00:05:29.224 --rc genhtml_function_coverage=1 00:05:29.224 --rc genhtml_legend=1 00:05:29.224 --rc geninfo_all_blocks=1 00:05:29.224 --rc geninfo_unexecuted_blocks=1 00:05:29.224 00:05:29.224 ' 00:05:29.224 14:58:29 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.224 --rc genhtml_branch_coverage=1 00:05:29.224 --rc genhtml_function_coverage=1 00:05:29.224 --rc genhtml_legend=1 00:05:29.224 --rc geninfo_all_blocks=1 00:05:29.224 --rc geninfo_unexecuted_blocks=1 00:05:29.224 00:05:29.225 ' 00:05:29.225 14:58:29 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.225 --rc genhtml_branch_coverage=1 00:05:29.225 --rc genhtml_function_coverage=1 00:05:29.225 --rc genhtml_legend=1 00:05:29.225 --rc geninfo_all_blocks=1 00:05:29.225 --rc geninfo_unexecuted_blocks=1 00:05:29.225 00:05:29.225 ' 00:05:29.225 14:58:29 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.225 --rc genhtml_branch_coverage=1 00:05:29.225 --rc 
genhtml_function_coverage=1 00:05:29.225 --rc genhtml_legend=1 00:05:29.225 --rc geninfo_all_blocks=1 00:05:29.225 --rc geninfo_unexecuted_blocks=1 00:05:29.225 00:05:29.225 ' 00:05:29.225 14:58:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d5dbffa3-8145-4a26-bb17-cc33a438f929 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d5dbffa3-8145-4a26-bb17-cc33a438f929 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:29.225 14:58:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:29.225 14:58:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.225 14:58:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.225 14:58:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.225 14:58:29 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.225 14:58:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.225 14:58:29 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.225 14:58:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:29.225 14:58:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:29.225 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:29.225 14:58:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:29.225 14:58:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:29.225 14:58:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:29.225 14:58:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:29.225 14:58:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:29.225 14:58:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:29.225 14:58:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:29.225 14:58:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:29.225 14:58:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:29.225 INFO: launching applications... 00:05:29.225 Waiting for target to run... 00:05:29.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
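Note on the "[: : integer expression expected" complaint that nvmf/common.sh line 33 prints each time it is sourced (it appears above for both json_config and json_config_extra_key): the trace shows '[' '' -eq 1 ']', i.e. an unset variable expanding to an empty string inside a numeric test. A minimal sketch of the usual guard, with a hypothetical flag name standing in for the variable the log does not show:

```bash
# Hypothetical flag; the real variable tested at nvmf/common.sh:33 is not
# visible in this log, only its empty expansion.
SOME_NUMERIC_FLAG=""

# '[ "" -eq 1 ]' is the error traced above; defaulting the expansion to 0
# keeps the operand numeric whether or not the flag was ever set.
if [ "${SOME_NUMERIC_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi
```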
00:05:29.225 14:58:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:29.225 14:58:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:29.225 14:58:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:29.225 14:58:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:29.225 14:58:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:29.225 14:58:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:29.225 14:58:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:29.225 14:58:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:29.225 14:58:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:29.225 14:58:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.225 14:58:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.225 14:58:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58531 00:05:29.225 14:58:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:29.225 14:58:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58531 /var/tmp/spdk_tgt.sock 00:05:29.225 14:58:29 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58531 ']' 00:05:29.225 14:58:29 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:29.225 14:58:29 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.226 14:58:29 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:29.226 14:58:29 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:29.226 14:58:29 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.226 14:58:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:29.484 [2024-11-20 14:58:30.112050] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:05:29.484 [2024-11-20 14:58:30.112998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58531 ] 00:05:30.050 [2024-11-20 14:58:30.586176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.050 [2024-11-20 14:58:30.726764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.047 00:05:31.047 INFO: shutting down applications... 
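The shutdown traced next (json_config/common.sh lines 40-45) sends SIGINT and then probes the PID with kill -0 every 0.5 s, up to 30 times, before declaring failure. A minimal standalone sketch of that poll loop, assuming the target's PID (58531 in this run):

```bash
# Sketch of json_config_test_shutdown_app's poll loop: SIGINT the target,
# then wait for the PID to disappear.
shutdown_app() {
    local app_pid=$1

    kill -SIGINT "$app_pid"

    # kill -0 sends no signal; it only tests whether the process still
    # exists. 30 probes at 0.5 s each bound the wait to 15 s.
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done

    echo "target $app_pid still alive after 15 s" >&2
    return 1
}
```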
00:05:31.047 14:58:31 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.047 14:58:31 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:31.047 14:58:31 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:31.047 14:58:31 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:31.047 14:58:31 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:31.047 14:58:31 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:31.047 14:58:31 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:31.047 14:58:31 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58531 ]] 00:05:31.047 14:58:31 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58531 00:05:31.047 14:58:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:31.047 14:58:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.047 14:58:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58531 00:05:31.047 14:58:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:31.317 14:58:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:31.317 14:58:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.317 14:58:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58531 00:05:31.317 14:58:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:31.883 14:58:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:31.883 14:58:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.883 14:58:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58531 00:05:31.883 14:58:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.449 14:58:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.449 14:58:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.449 14:58:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58531 00:05:32.449 14:58:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.015 14:58:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.015 14:58:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.015 14:58:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58531 00:05:33.015 14:58:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.273 14:58:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.273 14:58:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.273 14:58:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58531 00:05:33.273 14:58:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.840 14:58:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.840 14:58:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.840 14:58:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58531 00:05:33.840 14:58:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.405 14:58:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.405 14:58:35 json_config_extra_key -- json_config/common.sh@40 -- 
# (( i < 30 )) 00:05:34.405 14:58:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58531 00:05:34.405 14:58:35 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:34.405 14:58:35 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:34.405 14:58:35 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:34.405 SPDK target shutdown done 00:05:34.405 Success 00:05:34.405 14:58:35 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:34.405 14:58:35 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:34.405 00:05:34.405 real 0m5.302s 00:05:34.405 user 0m4.777s 00:05:34.405 sys 0m0.758s 00:05:34.405 14:58:35 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.405 ************************************ 00:05:34.405 END TEST json_config_extra_key 00:05:34.405 ************************************ 00:05:34.405 14:58:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:34.406 14:58:35 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:34.406 14:58:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.406 14:58:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.406 14:58:35 -- common/autotest_common.sh@10 -- # set +x 00:05:34.406 ************************************ 00:05:34.406 START TEST alias_rpc 00:05:34.406 ************************************ 00:05:34.406 14:58:35 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:34.665 * Looking for test storage... 00:05:34.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:34.665 14:58:35 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.665 14:58:35 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.665 14:58:35 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.665 14:58:35 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.665 14:58:35 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:34.665 14:58:35 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.665 14:58:35 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.665 --rc genhtml_branch_coverage=1 00:05:34.665 --rc genhtml_function_coverage=1 00:05:34.665 --rc genhtml_legend=1 00:05:34.665 --rc geninfo_all_blocks=1 00:05:34.665 --rc geninfo_unexecuted_blocks=1 00:05:34.665 00:05:34.665 ' 00:05:34.665 14:58:35 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.665 --rc genhtml_branch_coverage=1 00:05:34.665 --rc genhtml_function_coverage=1 00:05:34.665 --rc genhtml_legend=1 00:05:34.665 --rc geninfo_all_blocks=1 00:05:34.665 --rc geninfo_unexecuted_blocks=1 00:05:34.665 00:05:34.665 ' 00:05:34.665 14:58:35 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.665 --rc genhtml_branch_coverage=1 00:05:34.665 --rc genhtml_function_coverage=1 00:05:34.665 --rc genhtml_legend=1 00:05:34.665 --rc geninfo_all_blocks=1 00:05:34.665 --rc geninfo_unexecuted_blocks=1 00:05:34.665 00:05:34.665 ' 00:05:34.665 14:58:35 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.665 --rc genhtml_branch_coverage=1 00:05:34.665 --rc genhtml_function_coverage=1 00:05:34.665 --rc genhtml_legend=1 00:05:34.665 --rc geninfo_all_blocks=1 00:05:34.665 --rc geninfo_unexecuted_blocks=1 00:05:34.665 00:05:34.665 ' 00:05:34.665 14:58:35 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:34.665 14:58:35 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58649 00:05:34.665 14:58:35 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.665 14:58:35 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58649 00:05:34.665 14:58:35 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58649 ']' 00:05:34.665 14:58:35 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.665 14:58:35 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.665 14:58:35 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:34.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.665 14:58:35 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.665 14:58:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.923 [2024-11-20 14:58:35.530518] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:05:34.923 [2024-11-20 14:58:35.530673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58649 ] 00:05:34.923 [2024-11-20 14:58:35.717962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.180 [2024-11-20 14:58:35.872846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.146 14:58:36 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.146 14:58:36 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:36.146 14:58:36 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:36.715 14:58:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58649 00:05:36.715 14:58:37 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58649 ']' 00:05:36.715 14:58:37 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58649 00:05:36.715 14:58:37 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:36.715 14:58:37 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.715 14:58:37 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58649 00:05:36.715 14:58:37 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.715 killing process with pid 58649 00:05:36.715 14:58:37 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.715 14:58:37 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58649' 00:05:36.715 14:58:37 alias_rpc -- common/autotest_common.sh@973 -- # kill 58649 00:05:36.715 14:58:37 alias_rpc -- common/autotest_common.sh@978 -- # wait 58649 00:05:39.249 00:05:39.249 real 0m4.857s 00:05:39.249 user 0m4.743s 00:05:39.249 sys 0m0.803s 00:05:39.249 14:58:40 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.249 14:58:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.249 ************************************ 00:05:39.249 END TEST alias_rpc 00:05:39.249 ************************************ 00:05:39.249 14:58:40 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:39.249 14:58:40 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:39.249 14:58:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.249 14:58:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.249 14:58:40 -- common/autotest_common.sh@10 -- # set +x 00:05:39.508 ************************************ 00:05:39.508 START TEST spdkcli_tcp 00:05:39.508 ************************************ 00:05:39.508 14:58:40 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:39.508 * Looking for test storage... 
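Each test above re-runs the same lcov version probe: lt 1.15 2 feeds cmp_versions in scripts/common.sh, which splits both versions on '.', '-' or ':' and compares field by field to decide whether the legacy --rc coverage options are needed. A condensed sketch of that comparison (the real helper also validates that each field is numeric):

```bash
# Condensed lt(): succeed (return 0) when $1 is strictly older than $2.
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"

    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Missing fields count as 0, so comparing "1.15" with "2" decides
        # on the first field: 1 < 2.
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1 # equal is not less-than
}

lt 1.15 2 && echo "lcov predates 2.x: use the legacy --rc options"
```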
00:05:39.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:39.508 14:58:40 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.508 14:58:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.508 14:58:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.508 14:58:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.508 14:58:40 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:39.508 14:58:40 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.508 14:58:40 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.508 --rc genhtml_branch_coverage=1 00:05:39.508 --rc genhtml_function_coverage=1 00:05:39.508 --rc genhtml_legend=1 00:05:39.508 --rc geninfo_all_blocks=1 00:05:39.508 --rc geninfo_unexecuted_blocks=1 00:05:39.508 00:05:39.508 ' 00:05:39.508 14:58:40 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.508 --rc genhtml_branch_coverage=1 00:05:39.508 --rc genhtml_function_coverage=1 00:05:39.508 --rc genhtml_legend=1 00:05:39.508 --rc geninfo_all_blocks=1 00:05:39.508 --rc geninfo_unexecuted_blocks=1 00:05:39.508 
00:05:39.508 ' 00:05:39.508 14:58:40 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.508 --rc genhtml_branch_coverage=1 00:05:39.508 --rc genhtml_function_coverage=1 00:05:39.509 --rc genhtml_legend=1 00:05:39.509 --rc geninfo_all_blocks=1 00:05:39.509 --rc geninfo_unexecuted_blocks=1 00:05:39.509 00:05:39.509 ' 00:05:39.509 14:58:40 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.509 --rc genhtml_branch_coverage=1 00:05:39.509 --rc genhtml_function_coverage=1 00:05:39.509 --rc genhtml_legend=1 00:05:39.509 --rc geninfo_all_blocks=1 00:05:39.509 --rc geninfo_unexecuted_blocks=1 00:05:39.509 00:05:39.509 ' 00:05:39.509 14:58:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:39.509 14:58:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:39.509 14:58:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:39.509 14:58:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:39.509 14:58:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:39.509 14:58:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:39.509 14:58:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:39.509 14:58:40 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.509 14:58:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.509 14:58:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58767 00:05:39.509 14:58:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58767 00:05:39.509 14:58:40 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58767 ']' 00:05:39.509 14:58:40 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.509 14:58:40 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.509 14:58:40 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.509 14:58:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:39.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.509 14:58:40 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.509 14:58:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.768 [2024-11-20 14:58:40.446046] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
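The spdkcli_tcp run below is the one test here that exercises RPC over TCP: it starts socat as a bridge from TCP-LISTEN:9998 to the target's UNIX socket, then points rpc.py at 127.0.0.1:9998. A minimal sketch of that bridge, assuming socat is installed and spdk_tgt is already serving /var/tmp/spdk.sock (the test's single-shot socat handles exactly one TCP connection):

```bash
# Bridge TCP port 9998 to the target's UNIX-domain RPC socket.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# Same invocation as the trace below: -r 100 connection retries with a
# 2 s timeout (-t 2) cover the window while socat is still coming up.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
    -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid" 2>/dev/null || true
```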
00:05:39.768 [2024-11-20 14:58:40.446231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58767 ] 00:05:40.027 [2024-11-20 14:58:40.620451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.027 [2024-11-20 14:58:40.781597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.027 [2024-11-20 14:58:40.781633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.404 14:58:41 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.404 14:58:41 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:41.404 14:58:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58790 00:05:41.404 14:58:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:41.404 14:58:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:41.404 [ 00:05:41.404 "bdev_malloc_delete", 00:05:41.404 "bdev_malloc_create", 00:05:41.404 "bdev_null_resize", 00:05:41.404 "bdev_null_delete", 00:05:41.404 "bdev_null_create", 00:05:41.404 "bdev_nvme_cuse_unregister", 00:05:41.404 "bdev_nvme_cuse_register", 00:05:41.404 "bdev_opal_new_user", 00:05:41.404 "bdev_opal_set_lock_state", 00:05:41.404 "bdev_opal_delete", 00:05:41.404 "bdev_opal_get_info", 00:05:41.404 "bdev_opal_create", 00:05:41.404 "bdev_nvme_opal_revert", 00:05:41.404 "bdev_nvme_opal_init", 00:05:41.404 "bdev_nvme_send_cmd", 00:05:41.404 "bdev_nvme_set_keys", 00:05:41.404 "bdev_nvme_get_path_iostat", 00:05:41.404 "bdev_nvme_get_mdns_discovery_info", 00:05:41.404 "bdev_nvme_stop_mdns_discovery", 00:05:41.404 "bdev_nvme_start_mdns_discovery", 00:05:41.404 "bdev_nvme_set_multipath_policy", 00:05:41.404 "bdev_nvme_set_preferred_path", 00:05:41.404 "bdev_nvme_get_io_paths", 00:05:41.404 "bdev_nvme_remove_error_injection", 00:05:41.404 "bdev_nvme_add_error_injection", 00:05:41.404 "bdev_nvme_get_discovery_info", 00:05:41.404 "bdev_nvme_stop_discovery", 00:05:41.404 "bdev_nvme_start_discovery", 00:05:41.404 "bdev_nvme_get_controller_health_info", 00:05:41.404 "bdev_nvme_disable_controller", 00:05:41.404 "bdev_nvme_enable_controller", 00:05:41.404 "bdev_nvme_reset_controller", 00:05:41.404 "bdev_nvme_get_transport_statistics", 00:05:41.404 "bdev_nvme_apply_firmware", 00:05:41.404 "bdev_nvme_detach_controller", 00:05:41.404 "bdev_nvme_get_controllers", 00:05:41.404 "bdev_nvme_attach_controller", 00:05:41.404 "bdev_nvme_set_hotplug", 00:05:41.404 "bdev_nvme_set_options", 00:05:41.404 "bdev_passthru_delete", 00:05:41.404 "bdev_passthru_create", 00:05:41.404 "bdev_lvol_set_parent_bdev", 00:05:41.404 "bdev_lvol_set_parent", 00:05:41.404 "bdev_lvol_check_shallow_copy", 00:05:41.404 "bdev_lvol_start_shallow_copy", 00:05:41.404 "bdev_lvol_grow_lvstore", 00:05:41.404 "bdev_lvol_get_lvols", 00:05:41.404 "bdev_lvol_get_lvstores", 00:05:41.404 "bdev_lvol_delete", 00:05:41.404 "bdev_lvol_set_read_only", 00:05:41.404 "bdev_lvol_resize", 00:05:41.404 "bdev_lvol_decouple_parent", 00:05:41.404 "bdev_lvol_inflate", 00:05:41.404 "bdev_lvol_rename", 00:05:41.404 "bdev_lvol_clone_bdev", 00:05:41.404 "bdev_lvol_clone", 00:05:41.404 "bdev_lvol_snapshot", 00:05:41.404 "bdev_lvol_create", 00:05:41.404 "bdev_lvol_delete_lvstore", 00:05:41.404 "bdev_lvol_rename_lvstore", 00:05:41.404 
"bdev_lvol_create_lvstore", 00:05:41.404 "bdev_raid_set_options", 00:05:41.404 "bdev_raid_remove_base_bdev", 00:05:41.404 "bdev_raid_add_base_bdev", 00:05:41.404 "bdev_raid_delete", 00:05:41.404 "bdev_raid_create", 00:05:41.404 "bdev_raid_get_bdevs", 00:05:41.404 "bdev_error_inject_error", 00:05:41.404 "bdev_error_delete", 00:05:41.404 "bdev_error_create", 00:05:41.404 "bdev_split_delete", 00:05:41.404 "bdev_split_create", 00:05:41.404 "bdev_delay_delete", 00:05:41.404 "bdev_delay_create", 00:05:41.404 "bdev_delay_update_latency", 00:05:41.404 "bdev_zone_block_delete", 00:05:41.404 "bdev_zone_block_create", 00:05:41.404 "blobfs_create", 00:05:41.404 "blobfs_detect", 00:05:41.404 "blobfs_set_cache_size", 00:05:41.404 "bdev_xnvme_delete", 00:05:41.404 "bdev_xnvme_create", 00:05:41.404 "bdev_aio_delete", 00:05:41.404 "bdev_aio_rescan", 00:05:41.405 "bdev_aio_create", 00:05:41.405 "bdev_ftl_set_property", 00:05:41.405 "bdev_ftl_get_properties", 00:05:41.405 "bdev_ftl_get_stats", 00:05:41.405 "bdev_ftl_unmap", 00:05:41.405 "bdev_ftl_unload", 00:05:41.405 "bdev_ftl_delete", 00:05:41.405 "bdev_ftl_load", 00:05:41.405 "bdev_ftl_create", 00:05:41.405 "bdev_virtio_attach_controller", 00:05:41.405 "bdev_virtio_scsi_get_devices", 00:05:41.405 "bdev_virtio_detach_controller", 00:05:41.405 "bdev_virtio_blk_set_hotplug", 00:05:41.405 "bdev_iscsi_delete", 00:05:41.405 "bdev_iscsi_create", 00:05:41.405 "bdev_iscsi_set_options", 00:05:41.405 "accel_error_inject_error", 00:05:41.405 "ioat_scan_accel_module", 00:05:41.405 "dsa_scan_accel_module", 00:05:41.405 "iaa_scan_accel_module", 00:05:41.405 "keyring_file_remove_key", 00:05:41.405 "keyring_file_add_key", 00:05:41.405 "keyring_linux_set_options", 00:05:41.405 "fsdev_aio_delete", 00:05:41.405 "fsdev_aio_create", 00:05:41.405 "iscsi_get_histogram", 00:05:41.405 "iscsi_enable_histogram", 00:05:41.405 "iscsi_set_options", 00:05:41.405 "iscsi_get_auth_groups", 00:05:41.405 "iscsi_auth_group_remove_secret", 00:05:41.405 "iscsi_auth_group_add_secret", 00:05:41.405 "iscsi_delete_auth_group", 00:05:41.405 "iscsi_create_auth_group", 00:05:41.405 "iscsi_set_discovery_auth", 00:05:41.405 "iscsi_get_options", 00:05:41.405 "iscsi_target_node_request_logout", 00:05:41.405 "iscsi_target_node_set_redirect", 00:05:41.405 "iscsi_target_node_set_auth", 00:05:41.405 "iscsi_target_node_add_lun", 00:05:41.405 "iscsi_get_stats", 00:05:41.405 "iscsi_get_connections", 00:05:41.405 "iscsi_portal_group_set_auth", 00:05:41.405 "iscsi_start_portal_group", 00:05:41.405 "iscsi_delete_portal_group", 00:05:41.405 "iscsi_create_portal_group", 00:05:41.405 "iscsi_get_portal_groups", 00:05:41.405 "iscsi_delete_target_node", 00:05:41.405 "iscsi_target_node_remove_pg_ig_maps", 00:05:41.405 "iscsi_target_node_add_pg_ig_maps", 00:05:41.405 "iscsi_create_target_node", 00:05:41.405 "iscsi_get_target_nodes", 00:05:41.405 "iscsi_delete_initiator_group", 00:05:41.405 "iscsi_initiator_group_remove_initiators", 00:05:41.405 "iscsi_initiator_group_add_initiators", 00:05:41.405 "iscsi_create_initiator_group", 00:05:41.405 "iscsi_get_initiator_groups", 00:05:41.405 "nvmf_set_crdt", 00:05:41.405 "nvmf_set_config", 00:05:41.405 "nvmf_set_max_subsystems", 00:05:41.405 "nvmf_stop_mdns_prr", 00:05:41.405 "nvmf_publish_mdns_prr", 00:05:41.405 "nvmf_subsystem_get_listeners", 00:05:41.405 "nvmf_subsystem_get_qpairs", 00:05:41.405 "nvmf_subsystem_get_controllers", 00:05:41.405 "nvmf_get_stats", 00:05:41.405 "nvmf_get_transports", 00:05:41.405 "nvmf_create_transport", 00:05:41.405 "nvmf_get_targets", 00:05:41.405 
"nvmf_delete_target", 00:05:41.405 "nvmf_create_target", 00:05:41.405 "nvmf_subsystem_allow_any_host", 00:05:41.405 "nvmf_subsystem_set_keys", 00:05:41.405 "nvmf_subsystem_remove_host", 00:05:41.405 "nvmf_subsystem_add_host", 00:05:41.405 "nvmf_ns_remove_host", 00:05:41.405 "nvmf_ns_add_host", 00:05:41.405 "nvmf_subsystem_remove_ns", 00:05:41.405 "nvmf_subsystem_set_ns_ana_group", 00:05:41.405 "nvmf_subsystem_add_ns", 00:05:41.405 "nvmf_subsystem_listener_set_ana_state", 00:05:41.405 "nvmf_discovery_get_referrals", 00:05:41.405 "nvmf_discovery_remove_referral", 00:05:41.405 "nvmf_discovery_add_referral", 00:05:41.405 "nvmf_subsystem_remove_listener", 00:05:41.405 "nvmf_subsystem_add_listener", 00:05:41.405 "nvmf_delete_subsystem", 00:05:41.405 "nvmf_create_subsystem", 00:05:41.405 "nvmf_get_subsystems", 00:05:41.405 "env_dpdk_get_mem_stats", 00:05:41.405 "nbd_get_disks", 00:05:41.405 "nbd_stop_disk", 00:05:41.405 "nbd_start_disk", 00:05:41.405 "ublk_recover_disk", 00:05:41.405 "ublk_get_disks", 00:05:41.405 "ublk_stop_disk", 00:05:41.405 "ublk_start_disk", 00:05:41.405 "ublk_destroy_target", 00:05:41.405 "ublk_create_target", 00:05:41.405 "virtio_blk_create_transport", 00:05:41.405 "virtio_blk_get_transports", 00:05:41.405 "vhost_controller_set_coalescing", 00:05:41.405 "vhost_get_controllers", 00:05:41.405 "vhost_delete_controller", 00:05:41.405 "vhost_create_blk_controller", 00:05:41.405 "vhost_scsi_controller_remove_target", 00:05:41.405 "vhost_scsi_controller_add_target", 00:05:41.405 "vhost_start_scsi_controller", 00:05:41.405 "vhost_create_scsi_controller", 00:05:41.405 "thread_set_cpumask", 00:05:41.405 "scheduler_set_options", 00:05:41.405 "framework_get_governor", 00:05:41.405 "framework_get_scheduler", 00:05:41.405 "framework_set_scheduler", 00:05:41.405 "framework_get_reactors", 00:05:41.405 "thread_get_io_channels", 00:05:41.405 "thread_get_pollers", 00:05:41.405 "thread_get_stats", 00:05:41.405 "framework_monitor_context_switch", 00:05:41.405 "spdk_kill_instance", 00:05:41.405 "log_enable_timestamps", 00:05:41.405 "log_get_flags", 00:05:41.405 "log_clear_flag", 00:05:41.405 "log_set_flag", 00:05:41.405 "log_get_level", 00:05:41.405 "log_set_level", 00:05:41.405 "log_get_print_level", 00:05:41.405 "log_set_print_level", 00:05:41.405 "framework_enable_cpumask_locks", 00:05:41.405 "framework_disable_cpumask_locks", 00:05:41.405 "framework_wait_init", 00:05:41.405 "framework_start_init", 00:05:41.405 "scsi_get_devices", 00:05:41.405 "bdev_get_histogram", 00:05:41.405 "bdev_enable_histogram", 00:05:41.405 "bdev_set_qos_limit", 00:05:41.405 "bdev_set_qd_sampling_period", 00:05:41.405 "bdev_get_bdevs", 00:05:41.405 "bdev_reset_iostat", 00:05:41.405 "bdev_get_iostat", 00:05:41.405 "bdev_examine", 00:05:41.405 "bdev_wait_for_examine", 00:05:41.405 "bdev_set_options", 00:05:41.405 "accel_get_stats", 00:05:41.405 "accel_set_options", 00:05:41.405 "accel_set_driver", 00:05:41.405 "accel_crypto_key_destroy", 00:05:41.405 "accel_crypto_keys_get", 00:05:41.405 "accel_crypto_key_create", 00:05:41.405 "accel_assign_opc", 00:05:41.405 "accel_get_module_info", 00:05:41.405 "accel_get_opc_assignments", 00:05:41.405 "vmd_rescan", 00:05:41.405 "vmd_remove_device", 00:05:41.405 "vmd_enable", 00:05:41.405 "sock_get_default_impl", 00:05:41.405 "sock_set_default_impl", 00:05:41.405 "sock_impl_set_options", 00:05:41.405 "sock_impl_get_options", 00:05:41.405 "iobuf_get_stats", 00:05:41.405 "iobuf_set_options", 00:05:41.405 "keyring_get_keys", 00:05:41.405 "framework_get_pci_devices", 00:05:41.405 
"framework_get_config", 00:05:41.405 "framework_get_subsystems", 00:05:41.405 "fsdev_set_opts", 00:05:41.405 "fsdev_get_opts", 00:05:41.405 "trace_get_info", 00:05:41.405 "trace_get_tpoint_group_mask", 00:05:41.405 "trace_disable_tpoint_group", 00:05:41.405 "trace_enable_tpoint_group", 00:05:41.405 "trace_clear_tpoint_mask", 00:05:41.405 "trace_set_tpoint_mask", 00:05:41.405 "notify_get_notifications", 00:05:41.405 "notify_get_types", 00:05:41.405 "spdk_get_version", 00:05:41.405 "rpc_get_methods" 00:05:41.405 ] 00:05:41.405 14:58:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:41.405 14:58:42 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:41.405 14:58:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.664 14:58:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:41.664 14:58:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58767 00:05:41.664 14:58:42 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58767 ']' 00:05:41.664 14:58:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58767 00:05:41.664 14:58:42 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:41.664 14:58:42 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.664 14:58:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58767 00:05:41.664 14:58:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.664 killing process with pid 58767 00:05:41.664 14:58:42 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.664 14:58:42 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58767' 00:05:41.664 14:58:42 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58767 00:05:41.664 14:58:42 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58767 00:05:44.949 ************************************ 00:05:44.949 END TEST spdkcli_tcp 00:05:44.949 ************************************ 00:05:44.949 00:05:44.949 real 0m5.014s 00:05:44.949 user 0m8.895s 00:05:44.949 sys 0m0.944s 00:05:44.949 14:58:45 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.949 14:58:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.949 14:58:45 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:44.949 14:58:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.949 14:58:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.949 14:58:45 -- common/autotest_common.sh@10 -- # set +x 00:05:44.949 ************************************ 00:05:44.949 START TEST dpdk_mem_utility 00:05:44.949 ************************************ 00:05:44.949 14:58:45 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:44.949 * Looking for test storage... 
00:05:44.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:44.949 14:58:45 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.949 14:58:45 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.949 14:58:45 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.949 14:58:45 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.949 14:58:45 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:44.949 14:58:45 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.949 14:58:45 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.949 --rc genhtml_branch_coverage=1 00:05:44.949 --rc genhtml_function_coverage=1 00:05:44.949 --rc genhtml_legend=1 00:05:44.949 --rc geninfo_all_blocks=1 00:05:44.949 --rc geninfo_unexecuted_blocks=1 00:05:44.949 00:05:44.949 ' 00:05:44.949 14:58:45 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.949 --rc 
genhtml_branch_coverage=1 00:05:44.949 --rc genhtml_function_coverage=1 00:05:44.949 --rc genhtml_legend=1 00:05:44.949 --rc geninfo_all_blocks=1 00:05:44.949 --rc geninfo_unexecuted_blocks=1 00:05:44.949 00:05:44.949 ' 00:05:44.949 14:58:45 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.949 --rc genhtml_branch_coverage=1 00:05:44.949 --rc genhtml_function_coverage=1 00:05:44.949 --rc genhtml_legend=1 00:05:44.949 --rc geninfo_all_blocks=1 00:05:44.949 --rc geninfo_unexecuted_blocks=1 00:05:44.949 00:05:44.949 ' 00:05:44.949 14:58:45 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.949 --rc genhtml_branch_coverage=1 00:05:44.949 --rc genhtml_function_coverage=1 00:05:44.949 --rc genhtml_legend=1 00:05:44.949 --rc geninfo_all_blocks=1 00:05:44.949 --rc geninfo_unexecuted_blocks=1 00:05:44.949 00:05:44.949 ' 00:05:44.949 14:58:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:44.949 14:58:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58895 00:05:44.949 14:58:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.949 14:58:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58895 00:05:44.949 14:58:45 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58895 ']' 00:05:44.949 14:58:45 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.949 14:58:45 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.949 14:58:45 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.949 14:58:45 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.949 14:58:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.949 [2024-11-20 14:58:45.573517] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
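The mem-utility test that follows drives two tools: the env_dpdk_get_mem_stats RPC, which makes the target write its allocator state and reply with the dump path ({"filename": "/tmp/spdk_mem_dump.txt"}), and scripts/dpdk_mem_info.py, which renders the dump as the heap/mempool/memzone summary and the per-heap element list printed below. A minimal sketch of the same flow against a running spdk_tgt:

```bash
SPDK=/home/vagrant/spdk_repo/spdk

# Ask the target to snapshot its DPDK memory state; the reply names the
# dump file (/tmp/spdk_mem_dump.txt in this run).
"$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats

# Summarize heaps, mempools and memzones, then expand heap 0 (-m 0) into
# the element-by-element listing seen below.
"$SPDK/scripts/dpdk_mem_info.py"
"$SPDK/scripts/dpdk_mem_info.py" -m 0
```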
00:05:44.949 [2024-11-20 14:58:45.573932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58895 ] 00:05:44.949 [2024-11-20 14:58:45.766528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.207 [2024-11-20 14:58:45.920477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.605 14:58:47 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.605 14:58:47 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:46.605 14:58:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:46.605 14:58:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:46.605 14:58:47 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.605 14:58:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.605 { 00:05:46.605 "filename": "/tmp/spdk_mem_dump.txt" 00:05:46.605 } 00:05:46.605 14:58:47 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.605 14:58:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:46.605 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:46.605 1 heaps totaling size 824.000000 MiB 00:05:46.605 size: 824.000000 MiB heap id: 0 00:05:46.605 end heaps---------- 00:05:46.605 9 mempools totaling size 603.782043 MiB 00:05:46.605 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:46.605 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:46.605 size: 100.555481 MiB name: bdev_io_58895 00:05:46.605 size: 50.003479 MiB name: msgpool_58895 00:05:46.605 size: 36.509338 MiB name: fsdev_io_58895 00:05:46.605 size: 21.763794 MiB name: PDU_Pool 00:05:46.605 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:46.605 size: 4.133484 MiB name: evtpool_58895 00:05:46.605 size: 0.026123 MiB name: Session_Pool 00:05:46.605 end mempools------- 00:05:46.605 6 memzones totaling size 4.142822 MiB 00:05:46.605 size: 1.000366 MiB name: RG_ring_0_58895 00:05:46.605 size: 1.000366 MiB name: RG_ring_1_58895 00:05:46.605 size: 1.000366 MiB name: RG_ring_4_58895 00:05:46.605 size: 1.000366 MiB name: RG_ring_5_58895 00:05:46.605 size: 0.125366 MiB name: RG_ring_2_58895 00:05:46.605 size: 0.015991 MiB name: RG_ring_3_58895 00:05:46.605 end memzones------- 00:05:46.605 14:58:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:46.605 heap id: 0 total size: 824.000000 MiB number of busy elements: 323 number of free elements: 18 00:05:46.605 list of free elements. 
size: 16.779419 MiB
00:05:46.606 element at address: 0x200006400000 with size: 1.995972 MiB
00:05:46.606 element at address: 0x20000a600000 with size: 1.995972 MiB
00:05:46.606 element at address: 0x200003e00000 with size: 1.991028 MiB
00:05:46.606 element at address: 0x200019500040 with size: 0.999939 MiB
00:05:46.606 element at address: 0x200019900040 with size: 0.999939 MiB
00:05:46.606 element at address: 0x200019a00000 with size: 0.999084 MiB
00:05:46.606 element at address: 0x200032600000 with size: 0.994324 MiB
00:05:46.606 element at address: 0x200000400000 with size: 0.992004 MiB
00:05:46.606 element at address: 0x200019200000 with size: 0.959656 MiB
00:05:46.606 element at address: 0x200019d00040 with size: 0.936401 MiB
00:05:46.606 element at address: 0x200000200000 with size: 0.716980 MiB
00:05:46.606 element at address: 0x20001b400000 with size: 0.560730 MiB
00:05:46.606 element at address: 0x200000c00000 with size: 0.489197 MiB
00:05:46.606 element at address: 0x200019600000 with size: 0.487976 MiB
00:05:46.606 element at address: 0x200019e00000 with size: 0.485413 MiB
00:05:46.606 element at address: 0x200012c00000 with size: 0.433472 MiB
00:05:46.606 element at address: 0x200028800000 with size: 0.390442 MiB
00:05:46.606 element at address: 0x200000800000 with size: 0.350891 MiB
00:05:46.606 list of standard malloc elements. size: 199.289673 MiB
00:05:46.606 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:05:46.606 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:05:46.606 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:05:46.606 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:05:46.606 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:05:46.606 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:46.606 element at address: 0x200019deff40 with size: 0.062683 MiB
00:05:46.606 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:46.606 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:05:46.606 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:05:46.606 element at address: 0x200012bff040 with size: 0.000305 MiB
00:05:46.606 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:05:46.606 [several hundred further free-list elements of 0.000244 MiB each, at addresses 0x2000003d9d80 through 0x20002886fe80]
00:05:46.608 list of memzone associated elements. size: 607.930908 MiB
00:05:46.608 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:05:46.608 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:46.608 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:05:46.608 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:46.608 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:05:46.608 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58895_0
00:05:46.608 element at address: 0x200000dff340 with size: 48.003113 MiB
00:05:46.608 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58895_0
00:05:46.608 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:05:46.608 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58895_0
00:05:46.608 element at address: 0x200019fbe900 with size: 20.255615 MiB
00:05:46.608 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:46.608 element at address: 0x2000327feb00 with size: 18.005127 MiB
00:05:46.608 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:46.608 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:05:46.608 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58895_0
00:05:46.608 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:05:46.608 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58895
00:05:46.608 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:46.608 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58895
00:05:46.608 element at address: 0x2000196fde00 with size: 1.008179 MiB
00:05:46.608 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:46.608 element at address: 0x200019ebc780 with size: 1.008179 MiB
00:05:46.608 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:46.608 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:05:46.608 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:46.608 element at address: 0x200012cefcc0 with size: 1.008179 MiB
00:05:46.608 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:46.608 element at address: 0x200000cff100 with size: 1.000549 MiB
00:05:46.608 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58895
00:05:46.608 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:05:46.608 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58895
00:05:46.608 element at address: 0x200019affd40 with size: 1.000549 MiB
00:05:46.608 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58895
00:05:46.608 element at address: 0x2000326fe8c0 with size: 1.000549 MiB
00:05:46.608 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58895
00:05:46.608 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:05:46.608 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58895
00:05:46.608 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:05:46.608 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58895
00:05:46.608 element at address: 0x20001967dac0 with size: 0.500549 MiB
00:05:46.608 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:46.608 element at address: 0x200012c6f980 with size: 0.500549 MiB
00:05:46.608 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:46.608 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:05:46.608 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:46.608 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:05:46.608 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58895
00:05:46.608 element at address: 0x20000085df80 with size: 0.125549 MiB
00:05:46.608 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58895
00:05:46.608 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:05:46.608 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:46.608 element at address: 0x200028864140 with size: 0.023804 MiB
00:05:46.608 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:46.608 element at address: 0x200000859d40 with size: 0.016174 MiB
00:05:46.608 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58895
00:05:46.608 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:05:46.608 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:46.608 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:05:46.608 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58895
00:05:46.608 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:46.608 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58895
00:05:46.608 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:46.608 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58895
00:05:46.608 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:05:46.608 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:46.608 14:58:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:46.608 14:58:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58895
00:05:46.608 14:58:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58895 ']'
00:05:46.608 14:58:47 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58895
00:05:46.608 14:58:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:46.608 14:58:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:46.608 14:58:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58895
00:05:46.608 14:58:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:46.608 14:58:47 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 58895
14:58:47 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58895'
14:58:47 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58895
14:58:47 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58895
00:05:49.149
00:05:49.149 real 0m4.730s
00:05:49.149 user 0m4.404s
00:05:49.149 sys 0m0.883s
00:05:49.149 ************************************
00:05:49.149 END TEST dpdk_mem_utility
00:05:49.149 ************************************
00:05:49.149 14:58:49 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:49.149 14:58:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:49.149 14:58:49 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:49.149 14:58:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:49.149 14:58:49 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:49.149 14:58:49 -- common/autotest_common.sh@10 -- # set +x
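
The report above is what the dpdk_mem_utility test exists to exercise: it asks the running SPDK target for a DPDK malloc/memzone dump over JSON-RPC and then inspects the result. A minimal bash sketch of the same query, assuming this tree's env_dpdk_get_mem_stats RPC and that the RPC returns the dump file's path in a 'filename' field (the jq filter and default socket are illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Ask the app (default socket /var/tmp/spdk.sock) to write its DPDK
    # memory statistics to a file, then summarize that file.
    dump=$("$rpc" env_dpdk_get_mem_stats | jq -r '.filename')
    grep -c 'element at address' "$dump"     # how many malloc elements were listed
    grep 'associated memzone info' "$dump"   # one line per named memzone
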
00:05:49.149 ************************************ 00:05:49.149 START TEST event 00:05:49.149 ************************************ 00:05:49.149 14:58:49 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:49.408 * Looking for test storage... 00:05:49.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:49.408 14:58:50 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.408 14:58:50 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.408 14:58:50 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.408 14:58:50 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.408 14:58:50 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.408 14:58:50 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.408 14:58:50 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.408 14:58:50 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.408 14:58:50 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.408 14:58:50 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.408 14:58:50 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.408 14:58:50 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.408 14:58:50 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.408 14:58:50 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.408 14:58:50 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.408 14:58:50 event -- scripts/common.sh@344 -- # case "$op" in 00:05:49.408 14:58:50 event -- scripts/common.sh@345 -- # : 1 00:05:49.408 14:58:50 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.408 14:58:50 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.408 14:58:50 event -- scripts/common.sh@365 -- # decimal 1 00:05:49.408 14:58:50 event -- scripts/common.sh@353 -- # local d=1 00:05:49.408 14:58:50 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.408 14:58:50 event -- scripts/common.sh@355 -- # echo 1 00:05:49.408 14:58:50 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.408 14:58:50 event -- scripts/common.sh@366 -- # decimal 2 00:05:49.408 14:58:50 event -- scripts/common.sh@353 -- # local d=2 00:05:49.408 14:58:50 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.408 14:58:50 event -- scripts/common.sh@355 -- # echo 2 00:05:49.408 14:58:50 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.408 14:58:50 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.408 14:58:50 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.408 14:58:50 event -- scripts/common.sh@368 -- # return 0 00:05:49.408 14:58:50 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.408 14:58:50 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.408 --rc genhtml_branch_coverage=1 00:05:49.408 --rc genhtml_function_coverage=1 00:05:49.408 --rc genhtml_legend=1 00:05:49.408 --rc geninfo_all_blocks=1 00:05:49.408 --rc geninfo_unexecuted_blocks=1 00:05:49.408 00:05:49.408 ' 00:05:49.408 14:58:50 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.408 --rc genhtml_branch_coverage=1 00:05:49.408 --rc genhtml_function_coverage=1 00:05:49.408 --rc genhtml_legend=1 00:05:49.408 --rc 
geninfo_all_blocks=1 00:05:49.408 --rc geninfo_unexecuted_blocks=1 00:05:49.408 00:05:49.408 ' 00:05:49.408 14:58:50 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.409 --rc genhtml_branch_coverage=1 00:05:49.409 --rc genhtml_function_coverage=1 00:05:49.409 --rc genhtml_legend=1 00:05:49.409 --rc geninfo_all_blocks=1 00:05:49.409 --rc geninfo_unexecuted_blocks=1 00:05:49.409 00:05:49.409 ' 00:05:49.409 14:58:50 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.409 --rc genhtml_branch_coverage=1 00:05:49.409 --rc genhtml_function_coverage=1 00:05:49.409 --rc genhtml_legend=1 00:05:49.409 --rc geninfo_all_blocks=1 00:05:49.409 --rc geninfo_unexecuted_blocks=1 00:05:49.409 00:05:49.409 ' 00:05:49.409 14:58:50 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:49.409 14:58:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:49.409 14:58:50 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.409 14:58:50 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:49.409 14:58:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.409 14:58:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.409 ************************************ 00:05:49.409 START TEST event_perf 00:05:49.409 ************************************ 00:05:49.409 14:58:50 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.667 Running I/O for 1 seconds...[2024-11-20 14:58:50.244236] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:05:49.667 [2024-11-20 14:58:50.244377] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59014 ] 00:05:49.667 [2024-11-20 14:58:50.433394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.926 [2024-11-20 14:58:50.593794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.926 [2024-11-20 14:58:50.593957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.926 [2024-11-20 14:58:50.593989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.926 [2024-11-20 14:58:50.594006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.368 Running I/O for 1 seconds... 00:05:51.368 lcore 0: 89553 00:05:51.368 lcore 1: 89555 00:05:51.368 lcore 2: 89553 00:05:51.368 lcore 3: 89555 00:05:51.368 done. 
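
The -c 0xF / -m 0xF arguments in the EAL parameter lines above are hexadecimal core masks: bit i selects logical core i, so 0xF enables cores 0 through 3, which is why four reactors start and event_perf prints four lcore counters. A generic bash sketch of the bit arithmetic (not an SPDK helper):

    mask=0xF
    # Print every logical core selected by the mask.
    for ((i = 0; i < 64; i++)); do
      (( (mask >> i) & 1 )) && echo "lcore $i enabled"
    done
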
00:05:51.368 00:05:51.368 real 0m1.690s 00:05:51.368 user 0m4.393s 00:05:51.368 sys 0m0.164s 00:05:51.368 ************************************ 00:05:51.368 END TEST event_perf 00:05:51.368 ************************************ 00:05:51.368 14:58:51 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.368 14:58:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.368 14:58:51 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:51.368 14:58:51 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:51.368 14:58:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.368 14:58:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.368 ************************************ 00:05:51.368 START TEST event_reactor 00:05:51.368 ************************************ 00:05:51.368 14:58:51 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:51.368 [2024-11-20 14:58:51.995439] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:05:51.368 [2024-11-20 14:58:51.995585] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59059 ] 00:05:51.368 [2024-11-20 14:58:52.182700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.627 [2024-11-20 14:58:52.331466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.006 test_start 00:05:53.006 oneshot 00:05:53.006 tick 100 00:05:53.006 tick 100 00:05:53.006 tick 250 00:05:53.006 tick 100 00:05:53.006 tick 100 00:05:53.006 tick 250 00:05:53.006 tick 100 00:05:53.006 tick 500 00:05:53.006 tick 100 00:05:53.006 tick 100 00:05:53.006 tick 250 00:05:53.006 tick 100 00:05:53.006 tick 100 00:05:53.006 test_end 00:05:53.006 00:05:53.006 real 0m1.633s 00:05:53.006 user 0m1.401s 00:05:53.006 sys 0m0.123s 00:05:53.006 14:58:53 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.006 14:58:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:53.006 ************************************ 00:05:53.006 END TEST event_reactor 00:05:53.006 ************************************ 00:05:53.006 14:58:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:53.006 14:58:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:53.006 14:58:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.006 14:58:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.006 ************************************ 00:05:53.006 START TEST event_reactor_perf 00:05:53.006 ************************************ 00:05:53.006 14:58:53 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:53.006 [2024-11-20 14:58:53.708513] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
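
Every test in this log is framed the same way: a START TEST banner, the command timed by the shell (which is where the real/user/sys triplets come from), then an END TEST banner. A simplified sketch of that run_test pattern; the real helper in autotest_common.sh also handles xtrace and failure bookkeeping, so this is illustrative only:

    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"              # emits the real/user/sys lines seen above
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
    }
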
00:05:53.006 [2024-11-20 14:58:53.708655] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59090 ] 00:05:53.265 [2024-11-20 14:58:53.897519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.265 [2024-11-20 14:58:54.038951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.641 test_start 00:05:54.641 test_end 00:05:54.641 Performance: 376084 events per second 00:05:54.641 00:05:54.641 real 0m1.627s 00:05:54.641 user 0m1.391s 00:05:54.641 sys 0m0.127s 00:05:54.641 14:58:55 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.641 14:58:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.641 ************************************ 00:05:54.641 END TEST event_reactor_perf 00:05:54.641 ************************************ 00:05:54.641 14:58:55 event -- event/event.sh@49 -- # uname -s 00:05:54.641 14:58:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:54.641 14:58:55 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:54.641 14:58:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.641 14:58:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.641 14:58:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.641 ************************************ 00:05:54.641 START TEST event_scheduler 00:05:54.641 ************************************ 00:05:54.641 14:58:55 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:54.901 * Looking for test storage... 
00:05:54.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:54.902 14:58:55 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:54.902 14:58:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:54.902 14:58:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:54.902 14:58:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.902 14:58:55 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:54.902 14:58:55 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.902 14:58:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:54.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.902 --rc genhtml_branch_coverage=1 00:05:54.902 --rc genhtml_function_coverage=1 00:05:54.902 --rc genhtml_legend=1 00:05:54.902 --rc geninfo_all_blocks=1 00:05:54.902 --rc geninfo_unexecuted_blocks=1 00:05:54.902 00:05:54.902 ' 00:05:54.902 14:58:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:54.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.902 --rc genhtml_branch_coverage=1 00:05:54.902 --rc genhtml_function_coverage=1 00:05:54.902 --rc genhtml_legend=1 00:05:54.902 --rc geninfo_all_blocks=1 00:05:54.902 --rc geninfo_unexecuted_blocks=1 00:05:54.902 00:05:54.902 ' 00:05:54.902 14:58:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:54.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.902 --rc genhtml_branch_coverage=1 00:05:54.902 --rc genhtml_function_coverage=1 00:05:54.902 --rc genhtml_legend=1 00:05:54.902 --rc geninfo_all_blocks=1 00:05:54.902 --rc geninfo_unexecuted_blocks=1 00:05:54.902 00:05:54.902 ' 00:05:54.902 14:58:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:54.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.902 --rc genhtml_branch_coverage=1 00:05:54.902 --rc genhtml_function_coverage=1 00:05:54.902 --rc genhtml_legend=1 00:05:54.902 --rc geninfo_all_blocks=1 00:05:54.902 --rc geninfo_unexecuted_blocks=1 00:05:54.902 00:05:54.902 ' 00:05:54.902 14:58:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:54.902 14:58:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59166 00:05:54.902 14:58:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:54.902 14:58:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.902 14:58:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59166 00:05:54.902 14:58:55 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59166 ']' 00:05:54.902 14:58:55 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.902 14:58:55 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.902 14:58:55 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.902 14:58:55 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.902 14:58:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.902 [2024-11-20 14:58:55.699418] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:05:54.902 [2024-11-20 14:58:55.699577] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59166 ] 00:05:55.161 [2024-11-20 14:58:55.887343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.420 [2024-11-20 14:58:56.040697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.420 [2024-11-20 14:58:56.040801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.420 [2024-11-20 14:58:56.040964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.420 [2024-11-20 14:58:56.041061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.986 14:58:56 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.986 14:58:56 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:55.986 14:58:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:55.986 14:58:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.986 14:58:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.986 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.986 POWER: Cannot set governor of lcore 0 to userspace 00:05:55.986 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.986 POWER: Cannot set governor of lcore 0 to performance 00:05:55.986 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.986 POWER: Cannot set governor of lcore 0 to userspace 00:05:55.986 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.986 POWER: Cannot set governor of lcore 0 to userspace 00:05:55.986 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:55.986 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:55.986 POWER: Unable to set Power Management Environment for lcore 0 00:05:55.986 [2024-11-20 14:58:56.558714] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:55.986 [2024-11-20 14:58:56.558764] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:55.986 [2024-11-20 14:58:56.558778] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:55.986 [2024-11-20 14:58:56.558803] 
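
The waitforlisten helper traced above blocks until the freshly forked daemon answers on its UNIX-domain RPC socket before any test RPCs are issued. A hedged sketch of the idea; the probe method, interval and retry count are illustrative, and the real helper also keeps checking that the pid is still alive:

    sock=/var/tmp/spdk.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
      if "$rpc" -s "$sock" rpc_get_methods &>/dev/null; then
        break                # socket answers; the app is ready for RPCs
      fi
      sleep 0.1
    done
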
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:55.986 [2024-11-20 14:58:56.558814] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:55.986 [2024-11-20 14:58:56.558828] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:55.986 14:58:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.986 14:58:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:55.986 14:58:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.986 14:58:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.244 [2024-11-20 14:58:56.955601] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:56.244 14:58:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.244 14:58:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:56.244 14:58:56 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.244 14:58:56 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.244 14:58:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.244 ************************************ 00:05:56.244 START TEST scheduler_create_thread 00:05:56.244 ************************************ 00:05:56.244 14:58:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:56.244 14:58:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:56.244 14:58:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.244 14:58:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.244 2 00:05:56.244 14:58:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.244 14:58:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:56.244 14:58:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.244 14:58:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.244 3 00:05:56.244 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.244 14:58:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:56.244 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.244 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.244 4 00:05:56.244 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.244 14:58:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:56.244 14:58:57 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.244 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.244 5 00:05:56.244 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.245 14:58:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:56.245 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.245 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.245 6 00:05:56.245 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.245 14:58:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:56.245 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.245 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.245 7 00:05:56.245 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.245 14:58:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:56.245 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.245 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.245 8 00:05:56.245 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.245 14:58:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:56.245 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.245 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.504 9 00:05:56.504 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.504 14:58:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:56.504 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.504 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.072 10 00:05:57.072 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.072 14:58:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:57.072 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.072 14:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.448 14:58:59 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.448 14:58:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:58.448 14:58:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:58.448 14:58:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.448 14:58:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.392 14:58:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.392 14:58:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:59.392 14:58:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.392 14:58:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.958 14:59:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.958 14:59:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:59.958 14:59:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:59.958 14:59:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.958 14:59:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.892 14:59:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.892 00:06:00.892 real 0m4.391s 00:06:00.892 user 0m0.018s 00:06:00.892 sys 0m0.021s 00:06:00.892 14:59:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.892 ************************************ 00:06:00.892 END TEST scheduler_create_thread 00:06:00.892 ************************************ 00:06:00.892 14:59:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.892 14:59:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:00.892 14:59:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59166 00:06:00.892 14:59:01 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59166 ']' 00:06:00.892 14:59:01 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59166 00:06:00.892 14:59:01 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:00.892 14:59:01 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.892 14:59:01 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59166 00:06:00.892 14:59:01 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:00.892 14:59:01 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:00.892 killing process with pid 59166 00:06:00.892 14:59:01 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59166' 00:06:00.892 14:59:01 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59166 00:06:00.892 14:59:01 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59166 00:06:00.892 [2024-11-20 14:59:01.645918] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:02.268 00:06:02.268 real 0m7.620s 00:06:02.268 user 0m17.370s 00:06:02.268 sys 0m0.679s 00:06:02.268 14:59:02 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.268 ************************************ 00:06:02.268 END TEST event_scheduler 00:06:02.268 ************************************ 00:06:02.268 14:59:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.268 14:59:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:02.268 14:59:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:02.268 14:59:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.268 14:59:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.268 14:59:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.268 ************************************ 00:06:02.268 START TEST app_repeat 00:06:02.268 ************************************ 00:06:02.268 14:59:03 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:02.268 14:59:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.268 14:59:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.268 14:59:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:02.268 14:59:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.268 14:59:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:02.268 14:59:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:02.268 14:59:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:02.268 14:59:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59294 00:06:02.268 14:59:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.268 14:59:03 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:02.268 Process app_repeat pid: 59294 00:06:02.268 14:59:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59294' 00:06:02.268 14:59:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:02.268 spdk_app_start Round 0 00:06:02.268 14:59:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:02.268 14:59:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59294 /var/tmp/spdk-nbd.sock 00:06:02.268 14:59:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59294 ']' 00:06:02.268 14:59:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.268 14:59:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:02.268 14:59:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.268 14:59:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.268 14:59:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.527 [2024-11-20 14:59:03.142682] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
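
For reference, the scheduler_create_thread test above boils down to a handful of calls through rpc.py's plugin mechanism: scheduler_thread_create prints the new thread id (11 and 12 in the log), which scheduler_thread_set_active and scheduler_thread_delete then take as their first argument. A sketch, assuming the scheduler_plugin module from test/event/scheduler is importable (hence the PYTHONPATH export):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    export PYTHONPATH=/home/vagrant/spdk_repo/spdk/test/event/scheduler
    # Create an active thread pinned to core 1, halve its activity, delete it.
    tid=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n worker -m 0x2 -a 100)
    "$rpc" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    "$rpc" --plugin scheduler_plugin scheduler_thread_delete "$tid"
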
00:06:02.527 [2024-11-20 14:59:03.143420] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59294 ] 00:06:02.527 [2024-11-20 14:59:03.334057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.785 [2024-11-20 14:59:03.483822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.785 [2024-11-20 14:59:03.483850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.352 14:59:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.352 14:59:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:03.352 14:59:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.610 Malloc0 00:06:03.610 14:59:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.868 Malloc1 00:06:03.868 14:59:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.868 14:59:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.868 14:59:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.868 14:59:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.868 14:59:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.868 14:59:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.868 14:59:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.868 14:59:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.868 14:59:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.868 14:59:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.868 14:59:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.868 14:59:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.868 14:59:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.868 14:59:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.868 14:59:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.868 14:59:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.126 /dev/nbd0 00:06:04.126 14:59:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.126 14:59:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.126 14:59:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:04.126 14:59:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:04.126 14:59:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.126 14:59:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.126 14:59:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:04.126 14:59:04 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:04.126 14:59:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.126 14:59:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.126 14:59:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.126 1+0 records in 00:06:04.126 1+0 records out 00:06:04.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403264 s, 10.2 MB/s 00:06:04.126 14:59:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.126 14:59:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:04.126 14:59:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.126 14:59:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.126 14:59:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:04.126 14:59:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.126 14:59:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.126 14:59:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:04.384 /dev/nbd1 00:06:04.384 14:59:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.384 14:59:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.384 14:59:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:04.384 14:59:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:04.384 14:59:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.384 14:59:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.384 14:59:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:04.384 14:59:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:04.385 14:59:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.385 14:59:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.385 14:59:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.385 1+0 records in 00:06:04.385 1+0 records out 00:06:04.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456507 s, 9.0 MB/s 00:06:04.385 14:59:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.385 14:59:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:04.385 14:59:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.385 14:59:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.385 14:59:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:04.385 14:59:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.385 14:59:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.385 14:59:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.385 14:59:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.643 
14:59:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.643 14:59:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:04.643 { 00:06:04.643 "nbd_device": "/dev/nbd0", 00:06:04.643 "bdev_name": "Malloc0" 00:06:04.643 }, 00:06:04.643 { 00:06:04.643 "nbd_device": "/dev/nbd1", 00:06:04.643 "bdev_name": "Malloc1" 00:06:04.643 } 00:06:04.643 ]' 00:06:04.643 14:59:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.643 { 00:06:04.643 "nbd_device": "/dev/nbd0", 00:06:04.643 "bdev_name": "Malloc0" 00:06:04.643 }, 00:06:04.643 { 00:06:04.643 "nbd_device": "/dev/nbd1", 00:06:04.643 "bdev_name": "Malloc1" 00:06:04.643 } 00:06:04.643 ]' 00:06:04.643 14:59:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.900 14:59:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.901 /dev/nbd1' 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.901 /dev/nbd1' 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.901 256+0 records in 00:06:04.901 256+0 records out 00:06:04.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126765 s, 82.7 MB/s 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.901 256+0 records in 00:06:04.901 256+0 records out 00:06:04.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288499 s, 36.3 MB/s 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.901 256+0 records in 00:06:04.901 256+0 records out 00:06:04.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0351915 s, 29.8 MB/s 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.901 14:59:05 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.901 14:59:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.159 14:59:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.159 14:59:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.159 14:59:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.159 14:59:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.159 14:59:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.159 14:59:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.159 14:59:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.159 14:59:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.159 14:59:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.159 14:59:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.416 14:59:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.416 14:59:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.416 14:59:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.416 14:59:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.416 14:59:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.416 14:59:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.416 14:59:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.416 14:59:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.416 14:59:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.416 14:59:06 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.416 14:59:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.674 14:59:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.674 14:59:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.674 14:59:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.674 14:59:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.674 14:59:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.674 14:59:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.674 14:59:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.674 14:59:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.674 14:59:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.674 14:59:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.674 14:59:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.674 14:59:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.674 14:59:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.240 14:59:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.714 [2024-11-20 14:59:08.136488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.714 [2024-11-20 14:59:08.279301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.714 [2024-11-20 14:59:08.279301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.714 [2024-11-20 14:59:08.512149] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.714 [2024-11-20 14:59:08.512504] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.092 spdk_app_start Round 1 00:06:09.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.092 14:59:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:09.092 14:59:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:09.092 14:59:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59294 /var/tmp/spdk-nbd.sock 00:06:09.092 14:59:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59294 ']' 00:06:09.092 14:59:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.092 14:59:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.092 14:59:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:09.092 14:59:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.092 14:59:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.351 14:59:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.351 14:59:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:09.351 14:59:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.611 Malloc0 00:06:09.611 14:59:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.869 Malloc1 00:06:09.869 14:59:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.869 14:59:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.869 14:59:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.869 14:59:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.869 14:59:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.869 14:59:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.869 14:59:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.869 14:59:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.869 14:59:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.869 14:59:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.869 14:59:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.869 14:59:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.869 14:59:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:09.869 14:59:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.869 14:59:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.869 14:59:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.128 /dev/nbd0 00:06:10.128 14:59:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.128 14:59:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.128 14:59:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:10.128 14:59:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:10.128 14:59:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.128 14:59:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.128 14:59:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:10.128 14:59:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:10.128 14:59:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.128 14:59:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.128 14:59:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.128 1+0 records in 00:06:10.128 1+0 records out 
00:06:10.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389874 s, 10.5 MB/s 00:06:10.128 14:59:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.128 14:59:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:10.128 14:59:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.128 14:59:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.128 14:59:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:10.128 14:59:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.128 14:59:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.128 14:59:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.387 /dev/nbd1 00:06:10.387 14:59:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.387 14:59:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.387 14:59:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:10.387 14:59:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:10.387 14:59:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.387 14:59:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.387 14:59:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:10.387 14:59:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:10.387 14:59:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.387 14:59:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.387 14:59:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.387 1+0 records in 00:06:10.387 1+0 records out 00:06:10.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283472 s, 14.4 MB/s 00:06:10.387 14:59:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.387 14:59:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:10.387 14:59:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.387 14:59:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.387 14:59:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:10.387 14:59:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.387 14:59:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.388 14:59:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.388 14:59:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.388 14:59:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.646 14:59:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.646 { 00:06:10.646 "nbd_device": "/dev/nbd0", 00:06:10.646 "bdev_name": "Malloc0" 00:06:10.646 }, 00:06:10.646 { 00:06:10.646 "nbd_device": "/dev/nbd1", 00:06:10.646 "bdev_name": "Malloc1" 00:06:10.646 } 
00:06:10.646 ]' 00:06:10.646 14:59:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.646 { 00:06:10.646 "nbd_device": "/dev/nbd0", 00:06:10.646 "bdev_name": "Malloc0" 00:06:10.646 }, 00:06:10.646 { 00:06:10.646 "nbd_device": "/dev/nbd1", 00:06:10.647 "bdev_name": "Malloc1" 00:06:10.647 } 00:06:10.647 ]' 00:06:10.647 14:59:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.906 14:59:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.906 /dev/nbd1' 00:06:10.906 14:59:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.906 /dev/nbd1' 00:06:10.906 14:59:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.906 14:59:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.906 14:59:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.906 14:59:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.906 14:59:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.906 14:59:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.906 14:59:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.906 14:59:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.906 14:59:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.906 14:59:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.906 14:59:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.906 14:59:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:10.906 256+0 records in 00:06:10.906 256+0 records out 00:06:10.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133062 s, 78.8 MB/s 00:06:10.906 14:59:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.906 14:59:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:10.906 256+0 records in 00:06:10.906 256+0 records out 00:06:10.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0325606 s, 32.2 MB/s 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:10.907 256+0 records in 00:06:10.907 256+0 records out 00:06:10.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0348111 s, 30.1 MB/s 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:10.907 14:59:11 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.907 14:59:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.166 14:59:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.166 14:59:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.166 14:59:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.166 14:59:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.166 14:59:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.166 14:59:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.166 14:59:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.166 14:59:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.166 14:59:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.166 14:59:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.425 14:59:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.425 14:59:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.425 14:59:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.425 14:59:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.425 14:59:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.425 14:59:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.425 14:59:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.425 14:59:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.425 14:59:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.425 14:59:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.425 14:59:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.684 14:59:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.684 14:59:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.684 14:59:12 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.684 14:59:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.684 14:59:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.684 14:59:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.684 14:59:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:11.684 14:59:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.684 14:59:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.684 14:59:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.684 14:59:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.684 14:59:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.684 14:59:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.254 14:59:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:13.630 [2024-11-20 14:59:14.139208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.630 [2024-11-20 14:59:14.278177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.630 [2024-11-20 14:59:14.278198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.888 [2024-11-20 14:59:14.508690] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.888 [2024-11-20 14:59:14.508808] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.264 14:59:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:15.264 spdk_app_start Round 2 00:06:15.264 14:59:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:15.264 14:59:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59294 /var/tmp/spdk-nbd.sock 00:06:15.264 14:59:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59294 ']' 00:06:15.264 14:59:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.264 14:59:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.264 14:59:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:15.264 14:59:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.264 14:59:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.264 14:59:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.264 14:59:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:15.264 14:59:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.831 Malloc0 00:06:15.831 14:59:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.090 Malloc1 00:06:16.090 14:59:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.090 14:59:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.090 14:59:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.090 14:59:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.090 14:59:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.090 14:59:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.090 14:59:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.090 14:59:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.090 14:59:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.090 14:59:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.090 14:59:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.090 14:59:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.090 14:59:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:16.090 14:59:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.090 14:59:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.090 14:59:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.348 /dev/nbd0 00:06:16.348 14:59:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.348 14:59:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.349 14:59:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:16.349 14:59:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:16.349 14:59:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.349 14:59:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.349 14:59:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:16.349 14:59:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:16.349 14:59:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.349 14:59:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.349 14:59:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.349 1+0 records in 00:06:16.349 1+0 records out 
00:06:16.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363624 s, 11.3 MB/s 00:06:16.349 14:59:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.349 14:59:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:16.349 14:59:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.349 14:59:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.349 14:59:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:16.349 14:59:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.349 14:59:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.349 14:59:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.607 /dev/nbd1 00:06:16.607 14:59:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.607 14:59:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.607 14:59:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:16.607 14:59:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:16.607 14:59:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.607 14:59:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.607 14:59:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:16.607 14:59:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:16.607 14:59:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.607 14:59:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.607 14:59:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.607 1+0 records in 00:06:16.607 1+0 records out 00:06:16.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481244 s, 8.5 MB/s 00:06:16.607 14:59:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.607 14:59:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:16.607 14:59:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.607 14:59:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.607 14:59:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:16.607 14:59:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.607 14:59:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.607 14:59:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.607 14:59:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.607 14:59:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.867 { 00:06:16.867 "nbd_device": "/dev/nbd0", 00:06:16.867 "bdev_name": "Malloc0" 00:06:16.867 }, 00:06:16.867 { 00:06:16.867 "nbd_device": "/dev/nbd1", 00:06:16.867 "bdev_name": "Malloc1" 00:06:16.867 } 
00:06:16.867 ]' 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.867 { 00:06:16.867 "nbd_device": "/dev/nbd0", 00:06:16.867 "bdev_name": "Malloc0" 00:06:16.867 }, 00:06:16.867 { 00:06:16.867 "nbd_device": "/dev/nbd1", 00:06:16.867 "bdev_name": "Malloc1" 00:06:16.867 } 00:06:16.867 ]' 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.867 /dev/nbd1' 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.867 /dev/nbd1' 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.867 256+0 records in 00:06:16.867 256+0 records out 00:06:16.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012784 s, 82.0 MB/s 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.867 256+0 records in 00:06:16.867 256+0 records out 00:06:16.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270796 s, 38.7 MB/s 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.867 14:59:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.127 256+0 records in 00:06:17.127 256+0 records out 00:06:17.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313026 s, 33.5 MB/s 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.127 14:59:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.386 14:59:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.386 14:59:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.386 14:59:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.386 14:59:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.386 14:59:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.386 14:59:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.386 14:59:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.386 14:59:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.386 14:59:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.386 14:59:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.386 14:59:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.386 14:59:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.386 14:59:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.386 14:59:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.386 14:59:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.386 14:59:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.386 14:59:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.386 14:59:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.386 14:59:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.386 14:59:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.386 14:59:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.645 14:59:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.645 14:59:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.645 14:59:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:17.645 14:59:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.645 14:59:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.645 14:59:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.903 14:59:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:17.903 14:59:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.903 14:59:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.903 14:59:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.903 14:59:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.903 14:59:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.903 14:59:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.162 14:59:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:19.542 [2024-11-20 14:59:20.197924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.542 [2024-11-20 14:59:20.340972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.542 [2024-11-20 14:59:20.340973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.801 [2024-11-20 14:59:20.572300] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.801 [2024-11-20 14:59:20.572420] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.202 14:59:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59294 /var/tmp/spdk-nbd.sock 00:06:21.202 14:59:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59294 ']' 00:06:21.202 14:59:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:21.202 14:59:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.202 14:59:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:21.202 14:59:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.202 14:59:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.460 14:59:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.460 14:59:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:21.460 14:59:22 event.app_repeat -- event/event.sh@39 -- # killprocess 59294 00:06:21.460 14:59:22 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59294 ']' 00:06:21.460 14:59:22 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59294 00:06:21.460 14:59:22 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:21.460 14:59:22 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.460 14:59:22 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59294 00:06:21.460 14:59:22 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.460 killing process with pid 59294 00:06:21.460 14:59:22 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.460 14:59:22 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59294' 00:06:21.460 14:59:22 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59294 00:06:21.460 14:59:22 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59294 00:06:22.834 spdk_app_start is called in Round 0. 00:06:22.834 Shutdown signal received, stop current app iteration 00:06:22.834 Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 reinitialization... 00:06:22.834 spdk_app_start is called in Round 1. 00:06:22.834 Shutdown signal received, stop current app iteration 00:06:22.834 Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 reinitialization... 00:06:22.834 spdk_app_start is called in Round 2. 00:06:22.834 Shutdown signal received, stop current app iteration 00:06:22.834 Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 reinitialization... 00:06:22.834 spdk_app_start is called in Round 3. 00:06:22.834 Shutdown signal received, stop current app iteration 00:06:22.834 14:59:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:22.834 14:59:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:22.834 ************************************ 00:06:22.834 END TEST app_repeat 00:06:22.835 ************************************ 00:06:22.835 00:06:22.835 real 0m20.299s 00:06:22.835 user 0m42.712s 00:06:22.835 sys 0m3.823s 00:06:22.835 14:59:23 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.835 14:59:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.835 14:59:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:22.835 14:59:23 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:22.835 14:59:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.835 14:59:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.835 14:59:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.835 ************************************ 00:06:22.835 START TEST cpu_locks 00:06:22.835 ************************************ 00:06:22.835 14:59:23 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:22.835 * Looking for test storage... 
00:06:22.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:22.835 14:59:23 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.835 14:59:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.835 14:59:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.094 14:59:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.094 14:59:23 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:23.094 14:59:23 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.094 14:59:23 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.094 --rc genhtml_branch_coverage=1 00:06:23.094 --rc genhtml_function_coverage=1 00:06:23.094 --rc genhtml_legend=1 00:06:23.094 --rc geninfo_all_blocks=1 00:06:23.094 --rc geninfo_unexecuted_blocks=1 00:06:23.094 00:06:23.094 ' 00:06:23.094 14:59:23 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.094 --rc genhtml_branch_coverage=1 00:06:23.094 --rc genhtml_function_coverage=1 
00:06:23.094 --rc genhtml_legend=1 00:06:23.094 --rc geninfo_all_blocks=1 00:06:23.094 --rc geninfo_unexecuted_blocks=1 00:06:23.094 00:06:23.094 ' 00:06:23.094 14:59:23 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.094 --rc genhtml_branch_coverage=1 00:06:23.094 --rc genhtml_function_coverage=1 00:06:23.094 --rc genhtml_legend=1 00:06:23.094 --rc geninfo_all_blocks=1 00:06:23.094 --rc geninfo_unexecuted_blocks=1 00:06:23.094 00:06:23.094 ' 00:06:23.094 14:59:23 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.094 --rc genhtml_branch_coverage=1 00:06:23.094 --rc genhtml_function_coverage=1 00:06:23.094 --rc genhtml_legend=1 00:06:23.094 --rc geninfo_all_blocks=1 00:06:23.094 --rc geninfo_unexecuted_blocks=1 00:06:23.094 00:06:23.094 ' 00:06:23.094 14:59:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:23.094 14:59:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:23.094 14:59:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:23.094 14:59:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:23.094 14:59:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.094 14:59:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.094 14:59:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.094 ************************************ 00:06:23.094 START TEST default_locks 00:06:23.094 ************************************ 00:06:23.094 14:59:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:23.094 14:59:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.094 14:59:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59754 00:06:23.094 14:59:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59754 00:06:23.094 14:59:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59754 ']' 00:06:23.094 14:59:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.095 14:59:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.095 14:59:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.095 14:59:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.095 14:59:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.095 [2024-11-20 14:59:23.861275] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
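The lt/cmp_versions trace above decides whether the installed lcov predates 2.0 by splitting both version strings on ".", "-" and ":" and comparing the fields numerically, left to right, with missing fields treated as zero. A minimal standalone sketch of that comparison, assuming GNU bash; the function name mirrors scripts/common.sh, but this is an illustration, not the library code itself:

    # lt A B: succeed when version A is strictly older than version B.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # Missing fields compare as 0, so "2" behaves like "2.0".
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "old lcov: pass explicit --rc branch/function coverage flags"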
00:06:23.095 [2024-11-20 14:59:23.862372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59754 ] 00:06:23.354 [2024-11-20 14:59:24.062009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.614 [2024-11-20 14:59:24.215783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.578 14:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.578 14:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:24.578 14:59:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59754 00:06:24.578 14:59:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.578 14:59:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59754 00:06:25.147 14:59:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59754 00:06:25.147 14:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59754 ']' 00:06:25.147 14:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59754 00:06:25.147 14:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:25.147 14:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.147 14:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59754 00:06:25.147 14:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.147 14:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.147 killing process with pid 59754 00:06:25.147 14:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59754' 00:06:25.147 14:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59754 00:06:25.147 14:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59754 00:06:27.684 14:59:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59754 00:06:27.684 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:27.684 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59754 00:06:27.684 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:27.684 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.684 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:27.684 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.684 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59754 00:06:27.684 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59754 ']' 00:06:27.684 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.684 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.684 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.684 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.684 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.684 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.684 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59754) - No such process 00:06:27.684 ERROR: process (pid: 59754) is no longer running 00:06:27.685 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.685 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:27.685 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:27.685 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.685 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.685 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.685 14:59:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:27.685 14:59:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.685 14:59:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.685 14:59:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.685 00:06:27.685 real 0m4.709s 00:06:27.685 user 0m4.532s 00:06:27.685 sys 0m0.893s 00:06:27.685 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.685 ************************************ 00:06:27.685 END TEST default_locks 00:06:27.685 ************************************ 00:06:27.685 14:59:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.685 14:59:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:27.685 14:59:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.685 14:59:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.685 14:59:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.685 ************************************ 00:06:27.685 START TEST default_locks_via_rpc 00:06:27.685 ************************************ 00:06:27.685 14:59:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:27.685 14:59:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59836 00:06:27.685 14:59:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.685 14:59:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59836 00:06:27.685 14:59:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59836 ']' 00:06:27.685 14:59:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.685 14:59:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
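The locks_exist check that default_locks runs above is just lslocks piped through grep: if the target PID holds a file lock whose path contains spdk_cpu_lock, the core is considered claimed. A rough equivalent, assuming util-linux lslocks is installed:

    # Succeed when the given PID holds an SPDK CPU-core lock file
    # (the /var/tmp/spdk_cpu_lock_NNN files seen later in this log).
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 59754   # spdk_tgt launched with -m 0x1 should hold core 0's lock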
00:06:27.685 14:59:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.685 14:59:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.685 14:59:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.944 [2024-11-20 14:59:28.620049] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:06:27.944 [2024-11-20 14:59:28.620222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59836 ] 00:06:28.203 [2024-11-20 14:59:28.816853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.203 [2024-11-20 14:59:28.966209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.242 14:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.242 14:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:29.242 14:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:29.242 14:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.242 14:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.242 14:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.242 14:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:29.242 14:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.242 14:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.242 14:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.242 14:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.242 14:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.242 14:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.242 14:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.242 14:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59836 00:06:29.242 14:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59836 00:06:29.242 14:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.811 14:59:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59836 00:06:29.811 14:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59836 ']' 00:06:29.811 14:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59836 00:06:29.811 14:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:29.811 14:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.811 14:59:30 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59836 00:06:29.811 14:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.811 14:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.811 killing process with pid 59836 00:06:29.811 14:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59836' 00:06:29.811 14:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59836 00:06:29.811 14:59:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59836 00:06:33.100 00:06:33.100 real 0m4.749s 00:06:33.100 user 0m4.524s 00:06:33.100 sys 0m0.933s 00:06:33.100 14:59:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.100 14:59:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.100 ************************************ 00:06:33.100 END TEST default_locks_via_rpc 00:06:33.100 ************************************ 00:06:33.100 14:59:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:33.100 14:59:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.100 14:59:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.100 14:59:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.100 ************************************ 00:06:33.100 START TEST non_locking_app_on_locked_coremask 00:06:33.100 ************************************ 00:06:33.100 14:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:33.101 14:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59923 00:06:33.101 14:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.101 14:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59923 /var/tmp/spdk.sock 00:06:33.101 14:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59923 ']' 00:06:33.101 14:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.101 14:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.101 14:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.101 14:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.101 14:59:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.101 [2024-11-20 14:59:33.444339] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
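killprocess, traced twice above, is defensive about what it kills: it probes the PID with kill -0, checks via ps that the process still looks like an SPDK reactor, and only then signals it and waits. A condensed sketch of that pattern (Linux-only, per the '[' Linux = Linux ']' guard in the trace):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0     # nothing to do
        # comm= prints just the executable name (reactor_0 in this log);
        # checking it avoids killing a recycled PID that is no longer ours.
        local name
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true            # reap it if it is our child
    }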
00:06:33.101 [2024-11-20 14:59:33.444517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59923 ] 00:06:33.101 [2024-11-20 14:59:33.633487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.101 [2024-11-20 14:59:33.776629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.041 14:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.041 14:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:34.041 14:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:34.041 14:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59939 00:06:34.041 14:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59939 /var/tmp/spdk2.sock 00:06:34.041 14:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59939 ']' 00:06:34.041 14:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.041 14:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.041 14:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.041 14:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.041 14:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.299 [2024-11-20 14:59:34.966447] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:06:34.299 [2024-11-20 14:59:34.966607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59939 ] 00:06:34.558 [2024-11-20 14:59:35.157528] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
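non_locking_app_on_locked_coremask above deliberately starts a second target on the same core mask; that only succeeds because the second instance is passed --disable-cpumask-locks, so it never competes for the lock file the first instance holds, and -r gives it its own RPC socket. A sketch of the launch sequence, with the binary path and flags taken from this log:

    BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$BIN" -m 0x1 &                # first instance claims the core 0 lock
    pid1=$!
    # Same core mask, but opted out of lock claiming and on its own socket,
    # so startup does not abort with "Cannot create lock on core 0".
    "$BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!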
00:06:34.558 [2024-11-20 14:59:35.157604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.817 [2024-11-20 14:59:35.453050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.351 14:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.351 14:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:37.351 14:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59923 00:06:37.351 14:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59923 00:06:37.351 14:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.919 14:59:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59923 00:06:37.919 14:59:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59923 ']' 00:06:37.919 14:59:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59923 00:06:37.919 14:59:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:37.919 14:59:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.920 14:59:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59923 00:06:37.920 14:59:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.920 14:59:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.920 killing process with pid 59923 00:06:37.920 14:59:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59923' 00:06:37.920 14:59:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59923 00:06:37.920 14:59:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59923 00:06:43.257 14:59:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59939 00:06:43.257 14:59:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59939 ']' 00:06:43.257 14:59:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59939 00:06:43.257 14:59:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:43.257 14:59:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.257 14:59:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59939 00:06:43.257 14:59:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.257 14:59:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.257 14:59:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59939' 00:06:43.257 killing process with pid 59939 00:06:43.257 14:59:43 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59939 00:06:43.257 14:59:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59939 00:06:45.811 00:06:45.811 real 0m13.226s 00:06:45.811 user 0m13.261s 00:06:45.811 sys 0m1.858s 00:06:45.811 14:59:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.811 ************************************ 00:06:45.811 END TEST non_locking_app_on_locked_coremask 00:06:45.811 ************************************ 00:06:45.811 14:59:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.811 14:59:46 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:45.811 14:59:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.811 14:59:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.811 14:59:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.811 ************************************ 00:06:45.811 START TEST locking_app_on_unlocked_coremask 00:06:45.811 ************************************ 00:06:45.811 14:59:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:45.811 14:59:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60099 00:06:45.811 14:59:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:45.811 14:59:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60099 /var/tmp/spdk.sock 00:06:45.811 14:59:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60099 ']' 00:06:45.811 14:59:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.811 14:59:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.811 14:59:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.811 14:59:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.811 14:59:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.070 [2024-11-20 14:59:46.750608] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:06:46.071 [2024-11-20 14:59:46.750777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60099 ] 00:06:46.329 [2024-11-20 14:59:46.939232] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
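Every startup above blocks in waitforlisten until the new target's RPC socket is usable. The real helper in autotest_common.sh is considerably more involved (it retries RPC calls and handles remote targets); the following is only a much-simplified local sketch of the idea:

    # Poll until PID is alive and its UNIX-domain RPC socket exists.
    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        local i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during init
            [[ -S $sock ]] && return 0
            sleep 0.1
        done
        return 1
    }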
00:06:46.329 [2024-11-20 14:59:46.939309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.329 [2024-11-20 14:59:47.076021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.265 14:59:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.265 14:59:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:47.265 14:59:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60121 00:06:47.265 14:59:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.265 14:59:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60121 /var/tmp/spdk2.sock 00:06:47.265 14:59:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60121 ']' 00:06:47.265 14:59:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.265 14:59:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.265 14:59:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.265 14:59:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.265 14:59:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.524 [2024-11-20 14:59:48.178167] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:06:47.524 [2024-11-20 14:59:48.178313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60121 ] 00:06:47.784 [2024-11-20 14:59:48.367515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.043 [2024-11-20 14:59:48.654677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.578 14:59:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.578 14:59:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:50.578 14:59:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60121 00:06:50.578 14:59:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60121 00:06:50.578 14:59:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.147 14:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60099 00:06:51.147 14:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60099 ']' 00:06:51.147 14:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60099 00:06:51.147 14:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:51.147 14:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.147 14:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60099 00:06:51.147 14:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.147 14:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.147 killing process with pid 60099 00:06:51.147 14:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60099' 00:06:51.147 14:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60099 00:06:51.147 14:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60099 00:06:56.423 14:59:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60121 00:06:56.423 14:59:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60121 ']' 00:06:56.423 14:59:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60121 00:06:56.423 14:59:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:56.423 14:59:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.423 14:59:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60121 00:06:56.423 14:59:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.423 killing process with pid 60121 00:06:56.423 14:59:57 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.423 14:59:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60121' 00:06:56.423 14:59:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60121 00:06:56.423 14:59:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60121 00:06:58.960 00:06:58.960 real 0m13.026s 00:06:58.960 user 0m13.097s 00:06:58.960 sys 0m1.781s 00:06:58.960 14:59:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.960 14:59:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.960 ************************************ 00:06:58.960 END TEST locking_app_on_unlocked_coremask 00:06:58.960 ************************************ 00:06:58.960 14:59:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:58.960 14:59:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.960 14:59:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.960 14:59:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.960 ************************************ 00:06:58.960 START TEST locking_app_on_locked_coremask 00:06:58.960 ************************************ 00:06:58.960 14:59:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:58.960 14:59:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60280 00:06:58.960 14:59:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60280 /var/tmp/spdk.sock 00:06:58.960 14:59:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60280 ']' 00:06:58.960 14:59:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.960 14:59:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.960 14:59:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.960 14:59:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.960 14:59:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.960 14:59:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.218 [2024-11-20 14:59:59.835603] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:06:59.218 [2024-11-20 14:59:59.836403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60280 ] 00:06:59.218 [2024-11-20 15:00:00.008668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.507 [2024-11-20 15:00:00.154151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60307 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60307 /var/tmp/spdk2.sock 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60307 /var/tmp/spdk2.sock 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60307 /var/tmp/spdk2.sock 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60307 ']' 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.469 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.728 [2024-11-20 15:00:01.315244] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
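locking_app_on_locked_coremask expects the second waitforlisten to fail, since pid 60307 aborts at startup when it cannot claim core 0. The NOT wrapper traced below inverts a command's exit status so an expected failure keeps the test green; a minimal sketch (the real helper also validates that its argument is callable, per the valid_exec_arg lines in this trace):

    # NOT cmd args...: succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }

    NOT waitforlisten 60307 /var/tmp/spdk2.sock   # pass: 60307 is expected to exit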
00:07:00.728 [2024-11-20 15:00:01.315391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60307 ] 00:07:00.728 [2024-11-20 15:00:01.504285] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60280 has claimed it. 00:07:00.728 [2024-11-20 15:00:01.504367] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:01.294 ERROR: process (pid: 60307) is no longer running 00:07:01.294 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60307) - No such process 00:07:01.294 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.294 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:01.294 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:01.294 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.294 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:01.294 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.294 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60280 00:07:01.294 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60280 00:07:01.294 15:00:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.863 15:00:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60280 00:07:01.863 15:00:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60280 ']' 00:07:01.863 15:00:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60280 00:07:01.863 15:00:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:01.863 15:00:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.863 15:00:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60280 00:07:01.863 killing process with pid 60280 00:07:01.863 15:00:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.863 15:00:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.863 15:00:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60280' 00:07:01.863 15:00:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60280 00:07:01.863 15:00:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60280 00:07:04.398 00:07:04.398 real 0m5.408s 00:07:04.398 user 0m5.522s 00:07:04.398 sys 0m1.077s 00:07:04.399 15:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.399 ************************************ 00:07:04.399 END 
TEST locking_app_on_locked_coremask 00:07:04.399 ************************************ 00:07:04.399 15:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.399 15:00:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:04.399 15:00:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.399 15:00:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.399 15:00:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.399 ************************************ 00:07:04.399 START TEST locking_overlapped_coremask 00:07:04.399 ************************************ 00:07:04.399 15:00:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:04.399 15:00:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60376 00:07:04.399 15:00:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:04.399 15:00:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60376 /var/tmp/spdk.sock 00:07:04.399 15:00:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60376 ']' 00:07:04.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.399 15:00:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.399 15:00:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.399 15:00:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.399 15:00:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.399 15:00:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.657 [2024-11-20 15:00:05.307929] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:07:04.657 [2024-11-20 15:00:05.308084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60376 ] 00:07:04.916 [2024-11-20 15:00:05.495536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.916 [2024-11-20 15:00:05.649753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.916 [2024-11-20 15:00:05.649877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.916 [2024-11-20 15:00:05.649916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60400 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60400 /var/tmp/spdk2.sock 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60400 /var/tmp/spdk2.sock 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60400 /var/tmp/spdk2.sock 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60400 ']' 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.297 15:00:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.297 [2024-11-20 15:00:06.837489] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
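The two core masks in this test are chosen to collide on exactly one core: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so core 2 is contested, which is why the claim error below names core 2. The overlap is plain bitwise arithmetic:

    mask1=0x7     # 00111 -> cores 0,1,2 (first target)
    mask2=0x1c    # 11100 -> cores 2,3,4 (second target)
    printf 'contested cores: 0x%x\n' $(( mask1 & mask2 ))   # 0x4, i.e. core 2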
00:07:06.297 [2024-11-20 15:00:06.837913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60400 ] 00:07:06.297 [2024-11-20 15:00:07.029670] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60376 has claimed it. 00:07:06.297 [2024-11-20 15:00:07.029777] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:06.865 ERROR: process (pid: 60400) is no longer running 00:07:06.865 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60400) - No such process 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60376 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60376 ']' 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60376 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60376 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60376' 00:07:06.865 killing process with pid 60376 00:07:06.865 15:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60376 00:07:06.865 15:00:07 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60376 00:07:09.399 00:07:09.399 real 0m4.984s 00:07:09.399 user 0m13.361s 00:07:09.399 sys 0m0.827s 00:07:09.400 ************************************ 00:07:09.400 END TEST locking_overlapped_coremask 00:07:09.400 ************************************ 00:07:09.400 15:00:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.400 15:00:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.658 15:00:10 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:09.658 15:00:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.658 15:00:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.658 15:00:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.658 ************************************ 00:07:09.658 START TEST locking_overlapped_coremask_via_rpc 00:07:09.658 ************************************ 00:07:09.658 15:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:09.658 15:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60467 00:07:09.658 15:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:09.658 15:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60467 /var/tmp/spdk.sock 00:07:09.658 15:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60467 ']' 00:07:09.658 15:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.658 15:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.658 15:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.658 15:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.658 15:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.658 [2024-11-20 15:00:10.379133] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:07:09.658 [2024-11-20 15:00:10.379279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60467 ] 00:07:09.918 [2024-11-20 15:00:10.568194] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:09.918 [2024-11-20 15:00:10.568252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.918 [2024-11-20 15:00:10.722598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.918 [2024-11-20 15:00:10.722757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.918 [2024-11-20 15:00:10.722823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.299 15:00:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.299 15:00:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:11.299 15:00:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:11.299 15:00:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60496 00:07:11.299 15:00:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60496 /var/tmp/spdk2.sock 00:07:11.299 15:00:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60496 ']' 00:07:11.299 15:00:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.299 15:00:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.299 15:00:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.299 15:00:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.300 15:00:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.300 [2024-11-20 15:00:11.895175] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:07:11.300 [2024-11-20 15:00:11.895556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60496 ] 00:07:11.300 [2024-11-20 15:00:12.088792] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:11.300 [2024-11-20 15:00:12.088861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.558 [2024-11-20 15:00:12.364919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.558 [2024-11-20 15:00:12.367905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.559 [2024-11-20 15:00:12.367939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.094 [2024-11-20 15:00:14.488954] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60467 has claimed it. 
00:07:14.094 request: 00:07:14.094 { 00:07:14.094 "method": "framework_enable_cpumask_locks", 00:07:14.094 "req_id": 1 00:07:14.094 } 00:07:14.094 Got JSON-RPC error response 00:07:14.094 response: 00:07:14.094 { 00:07:14.094 "code": -32603, 00:07:14.094 "message": "Failed to claim CPU core: 2" 00:07:14.094 } 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60467 /var/tmp/spdk.sock 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60467 ']' 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60496 /var/tmp/spdk2.sock 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60496 ']' 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
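The -32603 response above is the point of the test, not a defect: both targets start with --disable-cpumask-locks, the first then claims its cores (0-2) over RPC, and the second's claim fails on the shared core 2. The sequence reduces to two RPC calls, sketched here with the rpc.py from this repo:

    # Succeeds: the first target (cores 0-2, default socket) claims its locks.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    # Fails with -32603 "Failed to claim CPU core: 2": the second target
    # (cores 2-4 on /var/tmp/spdk2.sock) overlaps on core 2.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks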
00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.094 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.353 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.353 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:14.353 ************************************ 00:07:14.353 END TEST locking_overlapped_coremask_via_rpc 00:07:14.353 ************************************ 00:07:14.353 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:14.353 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:14.353 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:14.353 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:14.353 00:07:14.353 real 0m4.719s 00:07:14.353 user 0m1.274s 00:07:14.353 sys 0m0.278s 00:07:14.353 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.353 15:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.353 15:00:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:14.353 15:00:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60467 ]] 00:07:14.353 15:00:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60467 00:07:14.353 15:00:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60467 ']' 00:07:14.353 15:00:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60467 00:07:14.353 15:00:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:14.353 15:00:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.353 15:00:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60467 00:07:14.353 killing process with pid 60467 00:07:14.353 15:00:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.353 15:00:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.353 15:00:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60467' 00:07:14.353 15:00:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60467 00:07:14.353 15:00:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60467 00:07:16.903 15:00:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60496 ]] 00:07:16.903 15:00:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60496 00:07:16.903 15:00:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60496 ']' 00:07:16.903 15:00:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60496 00:07:16.903 15:00:17 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:16.903 15:00:17 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.903 
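check_remaining_locks, visible just above, verifies that the claim left exactly one lock file per core of the first target's mask: the glob /var/tmp/spdk_cpu_lock_* must expand to the same list as the brace expansion /var/tmp/spdk_cpu_lock_{000..002}. The same assertion by hand, as a sketch:

    # One lock file per claimed core, zero-padded to three digits.
    locks=(/var/tmp/spdk_cpu_lock_*)
    expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${expected[*]}" ]] && echo "cores 0-2 locked, nothing else"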
15:00:17 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60496 00:07:16.903 killing process with pid 60496 00:07:16.903 15:00:17 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:16.903 15:00:17 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:16.903 15:00:17 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60496' 00:07:16.903 15:00:17 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60496 00:07:16.903 15:00:17 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60496 00:07:20.190 15:00:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:20.190 Process with pid 60467 is not found 00:07:20.190 15:00:20 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:20.190 15:00:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60467 ]] 00:07:20.190 15:00:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60467 00:07:20.190 15:00:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60467 ']' 00:07:20.190 15:00:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60467 00:07:20.190 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60467) - No such process 00:07:20.190 15:00:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60467 is not found' 00:07:20.190 15:00:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60496 ]] 00:07:20.190 15:00:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60496 00:07:20.190 15:00:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60496 ']' 00:07:20.190 15:00:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60496 00:07:20.190 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60496) - No such process 00:07:20.190 15:00:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60496 is not found' 00:07:20.190 Process with pid 60496 is not found 00:07:20.190 15:00:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:20.190 00:07:20.190 real 0m56.858s 00:07:20.190 user 1m33.243s 00:07:20.190 sys 0m9.339s 00:07:20.190 15:00:20 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.190 15:00:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.190 ************************************ 00:07:20.190 END TEST cpu_locks 00:07:20.190 ************************************ 00:07:20.190 ************************************ 00:07:20.190 END TEST event 00:07:20.190 ************************************ 00:07:20.190 00:07:20.190 real 1m30.385s 00:07:20.190 user 2m40.767s 00:07:20.190 sys 0m14.647s 00:07:20.190 15:00:20 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.190 15:00:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:20.190 15:00:20 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:20.190 15:00:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.190 15:00:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.190 15:00:20 -- common/autotest_common.sh@10 -- # set +x 00:07:20.190 ************************************ 00:07:20.190 START TEST thread 00:07:20.190 ************************************ 00:07:20.190 15:00:20 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:20.190 * Looking for test storage... 
00:07:20.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:20.190 15:00:20 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:20.190 15:00:20 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:20.190 15:00:20 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:20.190 15:00:20 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:20.190 15:00:20 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.190 15:00:20 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.190 15:00:20 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.190 15:00:20 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.190 15:00:20 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.190 15:00:20 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.190 15:00:20 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.190 15:00:20 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.190 15:00:20 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.190 15:00:20 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.190 15:00:20 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.190 15:00:20 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:20.190 15:00:20 thread -- scripts/common.sh@345 -- # : 1 00:07:20.190 15:00:20 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.190 15:00:20 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:20.190 15:00:20 thread -- scripts/common.sh@365 -- # decimal 1 00:07:20.190 15:00:20 thread -- scripts/common.sh@353 -- # local d=1 00:07:20.190 15:00:20 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.190 15:00:20 thread -- scripts/common.sh@355 -- # echo 1 00:07:20.190 15:00:20 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.190 15:00:20 thread -- scripts/common.sh@366 -- # decimal 2 00:07:20.190 15:00:20 thread -- scripts/common.sh@353 -- # local d=2 00:07:20.190 15:00:20 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.190 15:00:20 thread -- scripts/common.sh@355 -- # echo 2 00:07:20.190 15:00:20 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.190 15:00:20 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.190 15:00:20 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.190 15:00:20 thread -- scripts/common.sh@368 -- # return 0 00:07:20.190 15:00:20 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.190 15:00:20 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:20.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.190 --rc genhtml_branch_coverage=1 00:07:20.190 --rc genhtml_function_coverage=1 00:07:20.190 --rc genhtml_legend=1 00:07:20.190 --rc geninfo_all_blocks=1 00:07:20.190 --rc geninfo_unexecuted_blocks=1 00:07:20.190 00:07:20.190 ' 00:07:20.190 15:00:20 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:20.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.190 --rc genhtml_branch_coverage=1 00:07:20.190 --rc genhtml_function_coverage=1 00:07:20.190 --rc genhtml_legend=1 00:07:20.190 --rc geninfo_all_blocks=1 00:07:20.190 --rc geninfo_unexecuted_blocks=1 00:07:20.190 00:07:20.190 ' 00:07:20.190 15:00:20 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:20.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:20.190 --rc genhtml_branch_coverage=1 00:07:20.190 --rc genhtml_function_coverage=1 00:07:20.190 --rc genhtml_legend=1 00:07:20.190 --rc geninfo_all_blocks=1 00:07:20.191 --rc geninfo_unexecuted_blocks=1 00:07:20.191 00:07:20.191 ' 00:07:20.191 15:00:20 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:20.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.191 --rc genhtml_branch_coverage=1 00:07:20.191 --rc genhtml_function_coverage=1 00:07:20.191 --rc genhtml_legend=1 00:07:20.191 --rc geninfo_all_blocks=1 00:07:20.191 --rc geninfo_unexecuted_blocks=1 00:07:20.191 00:07:20.191 ' 00:07:20.191 15:00:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:20.191 15:00:20 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:20.191 15:00:20 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.191 15:00:20 thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.191 ************************************ 00:07:20.191 START TEST thread_poller_perf 00:07:20.191 ************************************ 00:07:20.191 15:00:20 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:20.191 [2024-11-20 15:00:20.755623] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:07:20.191 [2024-11-20 15:00:20.755894] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60691 ] 00:07:20.191 [2024-11-20 15:00:20.945948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.449 [2024-11-20 15:00:21.090471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.449 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:21.827 [2024-11-20T15:00:22.663Z] ====================================== 00:07:21.827 [2024-11-20T15:00:22.664Z] busy:2502817860 (cyc) 00:07:21.828 [2024-11-20T15:00:22.664Z] total_run_count: 389000 00:07:21.828 [2024-11-20T15:00:22.664Z] tsc_hz: 2490000000 (cyc) 00:07:21.828 [2024-11-20T15:00:22.664Z] ====================================== 00:07:21.828 [2024-11-20T15:00:22.664Z] poller_cost: 6433 (cyc), 2583 (nsec) 00:07:21.828 00:07:21.828 real 0m1.651s 00:07:21.828 user 0m1.401s 00:07:21.828 sys 0m0.140s 00:07:21.828 ************************************ 00:07:21.828 END TEST thread_poller_perf 00:07:21.828 ************************************ 00:07:21.828 15:00:22 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.828 15:00:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:21.828 15:00:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:21.828 15:00:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:21.828 15:00:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.828 15:00:22 thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.828 ************************************ 00:07:21.828 START TEST thread_poller_perf 00:07:21.828 ************************************ 00:07:21.828 15:00:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:21.828 [2024-11-20 15:00:22.477801] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:07:21.828 [2024-11-20 15:00:22.477942] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60733 ] 00:07:22.086 [2024-11-20 15:00:22.666520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.086 [2024-11-20 15:00:22.816296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.086 Running 1000 pollers for 1 seconds with 0 microseconds period. 
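The poller_cost line is straight arithmetic on the two counters above it: busy cycles divided by total_run_count gives cycles per poller invocation, and dividing by the TSC rate converts that to nanoseconds. For this 1-microsecond-period run: 2502817860 / 389000 = 6433 cycles, and 6433 cycles at 2.49 GHz is about 2583 ns, matching the report. The same computation as a sketch:

    # Re-derive poller_cost from the reported counters (integer math).
    busy=2502817860 runs=389000 tsc_hz=2490000000
    cyc=$(( busy / runs ))                  # 6433 cycles per poll
    nsec=$(( cyc * 1000000000 / tsc_hz ))   # 2583 ns at 2.49 GHz
    echo "poller_cost: $cyc (cyc), $nsec (nsec)"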
00:07:23.464 [2024-11-20T15:00:24.300Z] ====================================== 00:07:23.464 [2024-11-20T15:00:24.300Z] busy:2494427478 (cyc) 00:07:23.464 [2024-11-20T15:00:24.300Z] total_run_count: 5053000 00:07:23.464 [2024-11-20T15:00:24.300Z] tsc_hz: 2490000000 (cyc) 00:07:23.464 [2024-11-20T15:00:24.300Z] ====================================== 00:07:23.464 [2024-11-20T15:00:24.300Z] poller_cost: 493 (cyc), 197 (nsec) 00:07:23.464 00:07:23.464 real 0m1.648s 00:07:23.464 user 0m1.399s 00:07:23.464 sys 0m0.140s 00:07:23.464 15:00:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.464 15:00:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:23.464 ************************************ 00:07:23.464 END TEST thread_poller_perf 00:07:23.464 ************************************ 00:07:23.464 15:00:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:23.464 ************************************ 00:07:23.464 END TEST thread 00:07:23.464 ************************************ 00:07:23.464 00:07:23.464 real 0m3.697s 00:07:23.464 user 0m2.985s 00:07:23.464 sys 0m0.500s 00:07:23.464 15:00:24 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.464 15:00:24 thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.464 15:00:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:23.464 15:00:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:23.464 15:00:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.464 15:00:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.464 15:00:24 -- common/autotest_common.sh@10 -- # set +x 00:07:23.464 ************************************ 00:07:23.464 START TEST app_cmdline 00:07:23.464 ************************************ 00:07:23.464 15:00:24 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:23.723 * Looking for test storage... 
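The second run differs only in the period argument: a zero-microsecond period registers what SPDK treats as busy pollers, invoked on every reactor iteration rather than off a timer, so the timer bookkeeping drops out of the per-call path. The effect is visible directly in the two result blocks: total_run_count rises from 389000 to 5053000 in the same one second, and poller_cost falls from 6433 to 493 cycles.

    # Same benchmark, the only change being the period argument:
    #   poller_perf -b 1000 -l 1 -t 1    # timed pollers, 1 us period
    #   poller_perf -b 1000 -l 0 -t 1    # busy pollers, run every iteration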
00:07:23.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:23.723 15:00:24 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:23.723 15:00:24 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:23.724 15:00:24 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:23.724 15:00:24 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.724 15:00:24 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:23.724 15:00:24 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.724 15:00:24 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:23.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.724 --rc genhtml_branch_coverage=1 00:07:23.724 --rc genhtml_function_coverage=1 00:07:23.724 --rc genhtml_legend=1 00:07:23.724 --rc geninfo_all_blocks=1 00:07:23.724 --rc geninfo_unexecuted_blocks=1 00:07:23.724 00:07:23.724 ' 00:07:23.724 15:00:24 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:23.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.724 --rc genhtml_branch_coverage=1 00:07:23.724 --rc genhtml_function_coverage=1 00:07:23.724 --rc genhtml_legend=1 00:07:23.724 --rc geninfo_all_blocks=1 00:07:23.724 --rc geninfo_unexecuted_blocks=1 00:07:23.724 
00:07:23.724 ' 00:07:23.724 15:00:24 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:23.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.724 --rc genhtml_branch_coverage=1 00:07:23.724 --rc genhtml_function_coverage=1 00:07:23.724 --rc genhtml_legend=1 00:07:23.724 --rc geninfo_all_blocks=1 00:07:23.724 --rc geninfo_unexecuted_blocks=1 00:07:23.724 00:07:23.724 ' 00:07:23.724 15:00:24 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:23.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.724 --rc genhtml_branch_coverage=1 00:07:23.724 --rc genhtml_function_coverage=1 00:07:23.724 --rc genhtml_legend=1 00:07:23.724 --rc geninfo_all_blocks=1 00:07:23.724 --rc geninfo_unexecuted_blocks=1 00:07:23.724 00:07:23.724 ' 00:07:23.724 15:00:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:23.724 15:00:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60822 00:07:23.724 15:00:24 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:23.724 15:00:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60822 00:07:23.724 15:00:24 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60822 ']' 00:07:23.724 15:00:24 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.724 15:00:24 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.724 15:00:24 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.724 15:00:24 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.724 15:00:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:23.983 [2024-11-20 15:00:24.566283] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:07:23.983 [2024-11-20 15:00:24.566469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60822 ] 00:07:23.983 [2024-11-20 15:00:24.763655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.241 [2024-11-20 15:00:24.909274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.174 15:00:25 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.174 15:00:25 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:25.174 15:00:25 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:25.431 { 00:07:25.431 "version": "SPDK v25.01-pre git sha1 c1691a126", 00:07:25.431 "fields": { 00:07:25.431 "major": 25, 00:07:25.431 "minor": 1, 00:07:25.431 "patch": 0, 00:07:25.431 "suffix": "-pre", 00:07:25.431 "commit": "c1691a126" 00:07:25.431 } 00:07:25.431 } 00:07:25.431 15:00:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:25.431 15:00:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:25.431 15:00:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:25.431 15:00:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:25.431 15:00:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:25.431 15:00:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:25.431 15:00:26 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.431 15:00:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:25.431 15:00:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:25.431 15:00:26 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.431 15:00:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:25.431 15:00:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:25.431 15:00:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.431 15:00:26 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:25.431 15:00:26 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.431 15:00:26 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:25.431 15:00:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.431 15:00:26 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:25.431 15:00:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.431 15:00:26 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:25.431 15:00:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.431 15:00:26 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:25.431 15:00:26 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:25.431 15:00:26 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.689 request: 00:07:25.689 { 00:07:25.689 "method": "env_dpdk_get_mem_stats", 00:07:25.689 "req_id": 1 00:07:25.689 } 00:07:25.689 Got JSON-RPC error response 00:07:25.689 response: 00:07:25.689 { 00:07:25.689 "code": -32601, 00:07:25.689 "message": "Method not found" 00:07:25.689 } 00:07:25.689 15:00:26 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:25.689 15:00:26 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.689 15:00:26 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.689 15:00:26 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.689 15:00:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60822 00:07:25.689 15:00:26 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60822 ']' 00:07:25.689 15:00:26 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60822 00:07:25.689 15:00:26 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:25.689 15:00:26 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.689 15:00:26 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60822 00:07:25.689 15:00:26 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.689 killing process with pid 60822 00:07:25.689 15:00:26 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.689 15:00:26 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60822' 00:07:25.689 15:00:26 app_cmdline -- common/autotest_common.sh@973 -- # kill 60822 00:07:25.689 15:00:26 app_cmdline -- common/autotest_common.sh@978 -- # wait 60822 00:07:28.974 00:07:28.974 real 0m4.896s 00:07:28.974 user 0m4.889s 00:07:28.974 sys 0m0.851s 00:07:28.974 15:00:29 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.974 15:00:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.974 ************************************ 00:07:28.974 END TEST app_cmdline 00:07:28.974 ************************************ 00:07:28.974 15:00:29 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:28.974 15:00:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.974 15:00:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.974 15:00:29 -- common/autotest_common.sh@10 -- # set +x 00:07:28.974 ************************************ 00:07:28.974 START TEST version 00:07:28.974 ************************************ 00:07:28.974 15:00:29 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:28.974 * Looking for test storage... 
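That -32601 is the expected result, not a failure of the target: this spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside the allowlist is rejected as not found, even though env_dpdk_get_mem_stats exists in an unrestricted target. Sketched against the running target:

    # On the allowlist, both succeed:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods
    # Off the allowlist, rejected with -32601 "Method not found":
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats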
00:07:28.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:28.974 15:00:29 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:28.974 15:00:29 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:28.974 15:00:29 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:28.974 15:00:29 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:28.974 15:00:29 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.974 15:00:29 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.974 15:00:29 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.974 15:00:29 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.974 15:00:29 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.974 15:00:29 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.974 15:00:29 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.974 15:00:29 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.974 15:00:29 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.974 15:00:29 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.974 15:00:29 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.974 15:00:29 version -- scripts/common.sh@344 -- # case "$op" in 00:07:28.974 15:00:29 version -- scripts/common.sh@345 -- # : 1 00:07:28.974 15:00:29 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.974 15:00:29 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:28.974 15:00:29 version -- scripts/common.sh@365 -- # decimal 1 00:07:28.974 15:00:29 version -- scripts/common.sh@353 -- # local d=1 00:07:28.974 15:00:29 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.974 15:00:29 version -- scripts/common.sh@355 -- # echo 1 00:07:28.974 15:00:29 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.974 15:00:29 version -- scripts/common.sh@366 -- # decimal 2 00:07:28.974 15:00:29 version -- scripts/common.sh@353 -- # local d=2 00:07:28.974 15:00:29 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.974 15:00:29 version -- scripts/common.sh@355 -- # echo 2 00:07:28.974 15:00:29 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.974 15:00:29 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.974 15:00:29 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.974 15:00:29 version -- scripts/common.sh@368 -- # return 0 00:07:28.974 15:00:29 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.974 15:00:29 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:28.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.974 --rc genhtml_branch_coverage=1 00:07:28.974 --rc genhtml_function_coverage=1 00:07:28.974 --rc genhtml_legend=1 00:07:28.974 --rc geninfo_all_blocks=1 00:07:28.974 --rc geninfo_unexecuted_blocks=1 00:07:28.974 00:07:28.974 ' 00:07:28.974 15:00:29 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:28.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.974 --rc genhtml_branch_coverage=1 00:07:28.974 --rc genhtml_function_coverage=1 00:07:28.974 --rc genhtml_legend=1 00:07:28.974 --rc geninfo_all_blocks=1 00:07:28.974 --rc geninfo_unexecuted_blocks=1 00:07:28.974 00:07:28.974 ' 00:07:28.974 15:00:29 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:28.974 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:28.974 --rc genhtml_branch_coverage=1 00:07:28.974 --rc genhtml_function_coverage=1 00:07:28.974 --rc genhtml_legend=1 00:07:28.974 --rc geninfo_all_blocks=1 00:07:28.974 --rc geninfo_unexecuted_blocks=1 00:07:28.974 00:07:28.974 ' 00:07:28.974 15:00:29 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:28.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.974 --rc genhtml_branch_coverage=1 00:07:28.974 --rc genhtml_function_coverage=1 00:07:28.974 --rc genhtml_legend=1 00:07:28.974 --rc geninfo_all_blocks=1 00:07:28.974 --rc geninfo_unexecuted_blocks=1 00:07:28.974 00:07:28.974 ' 00:07:28.975 15:00:29 version -- app/version.sh@17 -- # get_header_version major 00:07:28.975 15:00:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:28.975 15:00:29 version -- app/version.sh@14 -- # cut -f2 00:07:28.975 15:00:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.975 15:00:29 version -- app/version.sh@17 -- # major=25 00:07:28.975 15:00:29 version -- app/version.sh@18 -- # get_header_version minor 00:07:28.975 15:00:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:28.975 15:00:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.975 15:00:29 version -- app/version.sh@14 -- # cut -f2 00:07:28.975 15:00:29 version -- app/version.sh@18 -- # minor=1 00:07:28.975 15:00:29 version -- app/version.sh@19 -- # get_header_version patch 00:07:28.975 15:00:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:28.975 15:00:29 version -- app/version.sh@14 -- # cut -f2 00:07:28.975 15:00:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.975 15:00:29 version -- app/version.sh@19 -- # patch=0 00:07:28.975 15:00:29 version -- app/version.sh@20 -- # get_header_version suffix 00:07:28.975 15:00:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:28.975 15:00:29 version -- app/version.sh@14 -- # cut -f2 00:07:28.975 15:00:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.975 15:00:29 version -- app/version.sh@20 -- # suffix=-pre 00:07:28.975 15:00:29 version -- app/version.sh@22 -- # version=25.1 00:07:28.975 15:00:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:28.975 15:00:29 version -- app/version.sh@28 -- # version=25.1rc0 00:07:28.975 15:00:29 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:28.975 15:00:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:28.975 15:00:29 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:28.975 15:00:29 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:28.975 ************************************ 00:07:28.975 END TEST version 00:07:28.975 ************************************ 00:07:28.975 00:07:28.975 real 0m0.327s 00:07:28.975 user 0m0.196s 00:07:28.975 sys 0m0.198s 00:07:28.975 15:00:29 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.975 15:00:29 version -- common/autotest_common.sh@10 -- # set +x 00:07:28.975 15:00:29 -- 
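The version test above parses include/spdk/version.h directly: get_header_version greps the matching #define, cuts the second field, and strips the quotes; major/minor/patch/suffix are then assembled (with the -pre suffix mapped to rc0, as the trace shows) and compared against the Python package. A sketch of the extraction:

    # Sketch of get_header_version for two components.
    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"'   # 25
    grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"'   # 1
    # The assembled string is cross-checked against the package itself:
    python3 -c 'import spdk; print(spdk.__version__)'                               # 25.1rc0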
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:28.975 15:00:29 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:28.975 15:00:29 -- spdk/autotest.sh@194 -- # uname -s 00:07:28.975 15:00:29 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:28.975 15:00:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:28.975 15:00:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:28.975 15:00:29 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:28.975 15:00:29 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:28.975 15:00:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:28.975 15:00:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.975 15:00:29 -- common/autotest_common.sh@10 -- # set +x 00:07:28.975 ************************************ 00:07:28.975 START TEST blockdev_nvme 00:07:28.975 ************************************ 00:07:28.975 15:00:29 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:28.975 * Looking for test storage... 00:07:28.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:28.975 15:00:29 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:28.975 15:00:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:28.975 15:00:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:28.975 15:00:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:28.975 15:00:29 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.236 15:00:29 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:29.236 15:00:29 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:29.236 15:00:29 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.236 15:00:29 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:29.236 15:00:29 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.236 15:00:29 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.236 15:00:29 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.236 15:00:29 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:29.236 15:00:29 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.236 15:00:29 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.236 --rc genhtml_branch_coverage=1 00:07:29.236 --rc genhtml_function_coverage=1 00:07:29.236 --rc genhtml_legend=1 00:07:29.236 --rc geninfo_all_blocks=1 00:07:29.236 --rc geninfo_unexecuted_blocks=1 00:07:29.236 00:07:29.236 ' 00:07:29.236 15:00:29 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.236 --rc genhtml_branch_coverage=1 00:07:29.236 --rc genhtml_function_coverage=1 00:07:29.236 --rc genhtml_legend=1 00:07:29.236 --rc geninfo_all_blocks=1 00:07:29.236 --rc geninfo_unexecuted_blocks=1 00:07:29.236 00:07:29.236 ' 00:07:29.236 15:00:29 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.236 --rc genhtml_branch_coverage=1 00:07:29.236 --rc genhtml_function_coverage=1 00:07:29.236 --rc genhtml_legend=1 00:07:29.236 --rc geninfo_all_blocks=1 00:07:29.236 --rc geninfo_unexecuted_blocks=1 00:07:29.236 00:07:29.236 ' 00:07:29.236 15:00:29 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.236 --rc genhtml_branch_coverage=1 00:07:29.236 --rc genhtml_function_coverage=1 00:07:29.236 --rc genhtml_legend=1 00:07:29.236 --rc geninfo_all_blocks=1 00:07:29.236 --rc geninfo_unexecuted_blocks=1 00:07:29.236 00:07:29.236 ' 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:29.236 15:00:29 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:29.236 15:00:29 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:29.237 15:00:29 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61016 00:07:29.237 15:00:29 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:29.237 15:00:29 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:29.237 15:00:29 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61016 00:07:29.237 15:00:29 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61016 ']' 00:07:29.237 15:00:29 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.237 15:00:29 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.237 15:00:29 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.237 15:00:29 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.237 15:00:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.237 [2024-11-20 15:00:29.961268] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:07:29.237 [2024-11-20 15:00:29.961627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61016 ] 00:07:29.495 [2024-11-20 15:00:30.151757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.495 [2024-11-20 15:00:30.309448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.899 15:00:31 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.899 15:00:31 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:07:30.899 15:00:31 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:30.899 15:00:31 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:30.899 15:00:31 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:30.899 15:00:31 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:30.899 15:00:31 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:30.899 15:00:31 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:30.899 15:00:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.899 15:00:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.157 15:00:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.157 15:00:31 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:31.157 15:00:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.157 15:00:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.158 15:00:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.158 15:00:31 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:31.158 15:00:31 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:31.158 15:00:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.158 15:00:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.158 15:00:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.158 15:00:31 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:31.158 15:00:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.158 15:00:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.158 15:00:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.158 15:00:31 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:31.158 15:00:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.158 15:00:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.158 15:00:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.158 15:00:31 blockdev_nvme -- 
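setup_nvme_conf above uses gen_nvme.sh to emit a bdev subsystem config that attaches the machine's four emulated PCIe controllers (0000:00:10.0 through 0000:00:13.0) as Nvme0-Nvme3, then hands it to the running target via load_subsystem_config. The same attachment can be done one controller at a time; a sketch, assuming the usual rpc.py flags (-b name, -t trtype, -a traddr):

    # Per-controller equivalent of the generated JSON config (sketch):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t PCIe -a 0000:00:11.0
    # ...and likewise Nvme2 at 0000:00:12.0 and Nvme3 at 0000:00:13.0.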
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:31.158 15:00:31 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:31.158 15:00:31 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:31.158 15:00:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.158 15:00:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.158 15:00:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.158 15:00:31 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:31.417 15:00:31 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:31.417 15:00:31 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9240b545-9a31-45c0-a48e-f6993420d8e8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9240b545-9a31-45c0-a48e-f6993420d8e8",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d170f54b-382e-4602-ad5a-1abd15898504"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d170f54b-382e-4602-ad5a-1abd15898504",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "fcf0d450-9505-45b6-95e3-330698038865"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fcf0d450-9505-45b6-95e3-330698038865",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "a8b207f6-5fe3-452d-9018-5022e5c4383e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a8b207f6-5fe3-452d-9018-5022e5c4383e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "e1d3c0b7-e75f-4192-ab84-a24404da5a7c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "e1d3c0b7-e75f-4192-ab84-a24404da5a7c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "6971fd6d-e419-4659-be1a-7115ea7e12cc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6971fd6d-e419-4659-be1a-7115ea7e12cc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:31.417 15:00:32 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:31.417 15:00:32 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:31.417 15:00:32 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:31.417 15:00:32 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61016 00:07:31.417 15:00:32 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61016 ']' 00:07:31.417 15:00:32 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61016 00:07:31.417 15:00:32 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:07:31.417 15:00:32 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.417 15:00:32 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61016 00:07:31.417 15:00:32 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.417 15:00:32 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.418 killing process with pid 61016 00:07:31.418 15:00:32 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61016' 00:07:31.418 15:00:32 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61016 00:07:31.418 15:00:32 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61016 00:07:33.952 15:00:34 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:33.952 15:00:34 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:33.952 15:00:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:33.952 15:00:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.952 15:00:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:33.952 ************************************ 00:07:33.952 START TEST bdev_hello_world 00:07:33.952 ************************************ 00:07:33.952 15:00:34 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:34.217 [2024-11-20 15:00:34.808939] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:07:34.217 [2024-11-20 15:00:34.809089] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61117 ] 00:07:34.217 [2024-11-20 15:00:34.995946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.474 [2024-11-20 15:00:35.139107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.408 [2024-11-20 15:00:35.889105] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:35.408 [2024-11-20 15:00:35.889181] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:35.408 [2024-11-20 15:00:35.889221] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:35.408 [2024-11-20 15:00:35.892677] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:35.408 [2024-11-20 15:00:35.893383] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:35.408 [2024-11-20 15:00:35.893421] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:35.408 [2024-11-20 15:00:35.893623] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
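The bdev_hello_world case above reduces to running the prebuilt example binary against the generated bdev config. A minimal sketch of the same invocation, assuming the repo layout used in this job:

  SPDK=/home/vagrant/spdk_repo/spdk
  # hello_bdev opens the named bdev, writes "Hello World!", reads it back,
  # and stops the app -- matching the NOTICE lines printed above.
  "$SPDK/build/examples/hello_bdev" \
      --json "$SPDK/test/bdev/bdev.json" \
      -b Nvme0n1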
00:07:35.408 00:07:35.408 [2024-11-20 15:00:35.893657] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:36.342 00:07:36.342 real 0m2.423s 00:07:36.342 user 0m1.978s 00:07:36.342 sys 0m0.338s 00:07:36.342 15:00:37 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.342 ************************************ 00:07:36.342 END TEST bdev_hello_world 00:07:36.342 ************************************ 00:07:36.342 15:00:37 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:36.599 15:00:37 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:36.599 15:00:37 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:36.599 15:00:37 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.599 15:00:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.599 ************************************ 00:07:36.599 START TEST bdev_bounds 00:07:36.599 ************************************ 00:07:36.599 15:00:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:36.599 15:00:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61159 00:07:36.599 15:00:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:36.600 15:00:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:36.600 Process bdevio pid: 61159 00:07:36.600 15:00:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61159' 00:07:36.600 15:00:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61159 00:07:36.600 15:00:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61159 ']' 00:07:36.600 15:00:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.600 15:00:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.600 15:00:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.600 15:00:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.600 15:00:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:36.600 [2024-11-20 15:00:37.316143] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
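At this point the bdev_bounds test has launched bdevio as a standalone app and is waiting for its RPC socket. A condensed sketch of that bring-up, with flags copied verbatim from the xtrace above (waitforlisten is the autotest_common.sh helper that polls /var/tmp/spdk.sock):

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
  bdevio_pid=$!
  waitforlisten "$bdevio_pid"    # returns once the app listens on /var/tmp/spdk.sock
  # with the app up, every suite below is driven by a single call:
  "$SPDK/test/bdev/bdevio/tests.py" perform_tests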
00:07:36.600 [2024-11-20 15:00:37.316858] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61159 ] 00:07:36.857 [2024-11-20 15:00:37.505192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.857 [2024-11-20 15:00:37.658819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.857 [2024-11-20 15:00:37.659000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.857 [2024-11-20 15:00:37.659029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.790 15:00:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.790 15:00:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:37.790 15:00:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:37.790 I/O targets: 00:07:37.790 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:37.790 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:37.790 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:37.790 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:37.790 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:37.790 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:37.790 00:07:37.790 00:07:37.790 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.790 http://cunit.sourceforge.net/ 00:07:37.790 00:07:37.790 00:07:37.790 Suite: bdevio tests on: Nvme3n1 00:07:37.790 Test: blockdev write read block ...passed 00:07:37.791 Test: blockdev write zeroes read block ...passed 00:07:37.791 Test: blockdev write zeroes read no split ...passed 00:07:37.791 Test: blockdev write zeroes read split ...passed 00:07:37.791 Test: blockdev write zeroes read split partial ...passed 00:07:37.791 Test: blockdev reset ...[2024-11-20 15:00:38.619144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:37.791 [2024-11-20 15:00:38.623365] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
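Each suite opens with a "blockdev reset" case; the NOTICE pair above shows the controller at 0000:00:13.0 being disconnected and successfully reattached. bdevio submits the reset through the bdev layer itself, but the same controller reset can also be forced by hand with the bdev_nvme_reset_controller RPC; a hypothetical out-of-band equivalent, not what bdevio does internally:

  # hypothetical: resets the controller attached as "Nvme3" out-of-band
  "$SPDK/scripts/rpc.py" bdev_nvme_reset_controller Nvme3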
00:07:37.791 passed 00:07:37.791 Test: blockdev write read 8 blocks ...passed 00:07:38.049 Test: blockdev write read size > 128k ...passed 00:07:38.049 Test: blockdev write read invalid size ...passed 00:07:38.049 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.049 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.049 Test: blockdev write read max offset ...passed 00:07:38.049 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.049 Test: blockdev writev readv 8 blocks ...passed 00:07:38.049 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.049 Test: blockdev writev readv block ...passed 00:07:38.049 Test: blockdev writev readv size > 128k ...passed 00:07:38.049 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.049 Test: blockdev comparev and writev ...[2024-11-20 15:00:38.632168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b660a000 len:0x1000 00:07:38.049 [2024-11-20 15:00:38.632224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:38.049 passed 00:07:38.049 Test: blockdev nvme passthru rw ...passed 00:07:38.049 Test: blockdev nvme passthru vendor specific ...passed 00:07:38.049 Test: blockdev nvme admin passthru ...[2024-11-20 15:00:38.633363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:38.049 [2024-11-20 15:00:38.633402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:38.049 passed 00:07:38.049 Test: blockdev copy ...passed 00:07:38.049 Suite: bdevio tests on: Nvme2n3 00:07:38.049 Test: blockdev write read block ...passed 00:07:38.049 Test: blockdev write zeroes read block ...passed 00:07:38.049 Test: blockdev write zeroes read no split ...passed 00:07:38.049 Test: blockdev write zeroes read split ...passed 00:07:38.049 Test: blockdev write zeroes read split partial ...passed 00:07:38.049 Test: blockdev reset ...[2024-11-20 15:00:38.715470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:38.049 [2024-11-20 15:00:38.719994] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
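The "comparev and writev" cases issue real NVMe COMPARE commands, so the COMPARE FAILURE (02/85) completions logged above are the expected outcome, not errors. Whether a bdev supports that path at all is advertised in its descriptor; a quick check against a running target, reusing the rpc.py/jq pattern the test itself relies on:

  "$SPDK/scripts/rpc.py" bdev_get_bdevs -b Nvme2n3 \
      | jq '.[0].supported_io_types | {compare, compare_and_write}'
  # the QEMU disks in this job report compare=true, compare_and_write=false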
00:07:38.049 passed 00:07:38.049 Test: blockdev write read 8 blocks ...passed 00:07:38.049 Test: blockdev write read size > 128k ...passed 00:07:38.049 Test: blockdev write read invalid size ...passed 00:07:38.049 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.049 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.049 Test: blockdev write read max offset ...passed 00:07:38.049 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.049 Test: blockdev writev readv 8 blocks ...passed 00:07:38.049 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.049 Test: blockdev writev readv block ...passed 00:07:38.049 Test: blockdev writev readv size > 128k ...passed 00:07:38.049 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.049 Test: blockdev comparev and writev ...[2024-11-20 15:00:38.728740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x299006000 len:0x1000 00:07:38.049 [2024-11-20 15:00:38.728792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:38.049 passed 00:07:38.049 Test: blockdev nvme passthru rw ...passed 00:07:38.049 Test: blockdev nvme passthru vendor specific ...passed 00:07:38.049 Test: blockdev nvme admin passthru ...[2024-11-20 15:00:38.729783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:38.049 [2024-11-20 15:00:38.729829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:38.049 passed 00:07:38.049 Test: blockdev copy ...passed 00:07:38.049 Suite: bdevio tests on: Nvme2n2 00:07:38.049 Test: blockdev write read block ...passed 00:07:38.049 Test: blockdev write zeroes read block ...passed 00:07:38.049 Test: blockdev write zeroes read no split ...passed 00:07:38.049 Test: blockdev write zeroes read split ...passed 00:07:38.049 Test: blockdev write zeroes read split partial ...passed 00:07:38.049 Test: blockdev reset ...[2024-11-20 15:00:38.817435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:38.049 [2024-11-20 15:00:38.822118] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:38.049 passed 00:07:38.049 Test: blockdev write read 8 blocks ...passed 00:07:38.049 Test: blockdev write read size > 128k ...passed 00:07:38.049 Test: blockdev write read invalid size ...passed 00:07:38.049 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.049 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.049 Test: blockdev write read max offset ...passed 00:07:38.049 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.049 Test: blockdev writev readv 8 blocks ...passed 00:07:38.049 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.049 Test: blockdev writev readv block ...passed 00:07:38.049 Test: blockdev writev readv size > 128k ...passed 00:07:38.049 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.049 Test: blockdev comparev and writev ...[2024-11-20 15:00:38.830796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c663c000 len:0x1000 00:07:38.049 [2024-11-20 15:00:38.830874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:38.049 passed 00:07:38.049 Test: blockdev nvme passthru rw ...passed 00:07:38.049 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:00:38.831850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:38.049 [2024-11-20 15:00:38.831889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:38.049 passed 00:07:38.049 Test: blockdev nvme admin passthru ...passed 00:07:38.049 Test: blockdev copy ...passed 00:07:38.049 Suite: bdevio tests on: Nvme2n1 00:07:38.049 Test: blockdev write read block ...passed 00:07:38.049 Test: blockdev write zeroes read block ...passed 00:07:38.049 Test: blockdev write zeroes read no split ...passed 00:07:38.309 Test: blockdev write zeroes read split ...passed 00:07:38.309 Test: blockdev write zeroes read split partial ...passed 00:07:38.309 Test: blockdev reset ...[2024-11-20 15:00:38.918017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:38.309 [2024-11-20 15:00:38.922887] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:38.309 passed 00:07:38.309 Test: blockdev write read 8 blocks ...passed 00:07:38.309 Test: blockdev write read size > 128k ...passed 00:07:38.309 Test: blockdev write read invalid size ...passed 00:07:38.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.309 Test: blockdev write read max offset ...passed 00:07:38.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.309 Test: blockdev writev readv 8 blocks ...passed 00:07:38.309 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.309 Test: blockdev writev readv block ...passed 00:07:38.309 Test: blockdev writev readv size > 128k ...passed 00:07:38.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.309 Test: blockdev comparev and writev ...[2024-11-20 15:00:38.931686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c6638000 len:0x1000 00:07:38.309 [2024-11-20 15:00:38.931775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:38.309 passed 00:07:38.309 Test: blockdev nvme passthru rw ...passed 00:07:38.309 Test: blockdev nvme passthru vendor specific ...passed 00:07:38.309 Test: blockdev nvme admin passthru ...[2024-11-20 15:00:38.932690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:38.309 [2024-11-20 15:00:38.932739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:38.309 passed 00:07:38.309 Test: blockdev copy ...passed 00:07:38.309 Suite: bdevio tests on: Nvme1n1 00:07:38.309 Test: blockdev write read block ...passed 00:07:38.309 Test: blockdev write zeroes read block ...passed 00:07:38.309 Test: blockdev write zeroes read no split ...passed 00:07:38.309 Test: blockdev write zeroes read split ...passed 00:07:38.309 Test: blockdev write zeroes read split partial ...passed 00:07:38.309 Test: blockdev reset ...[2024-11-20 15:00:39.018474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:38.309 [2024-11-20 15:00:39.022664] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:38.309 passed 00:07:38.309 Test: blockdev write read 8 blocks ...passed 00:07:38.309 Test: blockdev write read size > 128k ...passed 00:07:38.309 Test: blockdev write read invalid size ...passed 00:07:38.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.310 Test: blockdev write read max offset ...passed 00:07:38.310 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.310 Test: blockdev writev readv 8 blocks ...passed 00:07:38.310 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.310 Test: blockdev writev readv block ...passed 00:07:38.310 Test: blockdev writev readv size > 128k ...passed 00:07:38.310 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.310 Test: blockdev comparev and writev ...[2024-11-20 15:00:39.030743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c6634000 len:0x1000 00:07:38.310 [2024-11-20 15:00:39.030805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:38.310 passed 00:07:38.310 Test: blockdev nvme passthru rw ...passed 00:07:38.310 Test: blockdev nvme passthru vendor specific ...passed 00:07:38.310 Test: blockdev nvme admin passthru ...[2024-11-20 15:00:39.031750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:38.310 [2024-11-20 15:00:39.031791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:38.310 passed 00:07:38.310 Test: blockdev copy ...passed 00:07:38.310 Suite: bdevio tests on: Nvme0n1 00:07:38.310 Test: blockdev write read block ...passed 00:07:38.310 Test: blockdev write zeroes read block ...passed 00:07:38.310 Test: blockdev write zeroes read no split ...passed 00:07:38.310 Test: blockdev write zeroes read split ...passed 00:07:38.310 Test: blockdev write zeroes read split partial ...passed 00:07:38.310 Test: blockdev reset ...[2024-11-20 15:00:39.113055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:38.310 [2024-11-20 15:00:39.117266] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
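The last suite (Nvme0n1) is the odd one out: per the bdev dump earlier it is the only disk formatted with separate metadata ("md_size": 64, "md_interleave": false), which is why its comparev_and_writev case is skipped just below. A quick way to confirm that from the running target:

  "$SPDK/scripts/rpc.py" bdev_get_bdevs -b Nvme0n1 \
      | jq '.[0] | {md_size, md_interleave}'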
00:07:38.310 passed 00:07:38.310 Test: blockdev write read 8 blocks ...passed 00:07:38.310 Test: blockdev write read size > 128k ...passed 00:07:38.310 Test: blockdev write read invalid size ...passed 00:07:38.310 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.310 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.310 Test: blockdev write read max offset ...passed 00:07:38.310 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.310 Test: blockdev writev readv 8 blocks ...passed 00:07:38.310 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.310 Test: blockdev writev readv block ...passed 00:07:38.310 Test: blockdev writev readv size > 128k ...passed 00:07:38.310 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.310 Test: blockdev comparev and writev ...passed 00:07:38.310 Test: blockdev nvme passthru rw ...[2024-11-20 15:00:39.124531] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:38.310 separate metadata which is not supported yet. 00:07:38.310 passed 00:07:38.310 Test: blockdev nvme passthru vendor specific ...passed 00:07:38.310 Test: blockdev nvme admin passthru ...[2024-11-20 15:00:39.125338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:38.310 [2024-11-20 15:00:39.125385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:38.310 passed 00:07:38.310 Test: blockdev copy ...passed 00:07:38.310 00:07:38.310 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.310 suites 6 6 n/a 0 0 00:07:38.310 tests 138 138 138 0 0 00:07:38.310 asserts 893 893 893 0 n/a 00:07:38.310 00:07:38.310 Elapsed time = 1.594 seconds 00:07:38.310 0 00:07:38.577 15:00:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61159 00:07:38.577 15:00:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61159 ']' 00:07:38.577 15:00:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61159 00:07:38.577 15:00:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:38.577 15:00:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.577 15:00:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61159 00:07:38.577 killing process with pid 61159 00:07:38.577 15:00:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.577 15:00:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.577 15:00:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61159' 00:07:38.577 15:00:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61159 00:07:38.577 15:00:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61159 00:07:39.953 15:00:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:39.953 00:07:39.953 real 0m3.163s 00:07:39.953 user 0m8.014s 00:07:39.953 sys 0m0.558s 00:07:39.953 15:00:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.953 15:00:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:39.953 ************************************ 00:07:39.953 END 
TEST bdev_bounds 00:07:39.953 ************************************ 00:07:39.953 15:00:40 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:39.953 15:00:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:39.953 15:00:40 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.953 15:00:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.953 ************************************ 00:07:39.953 START TEST bdev_nbd 00:07:39.953 ************************************ 00:07:39.953 15:00:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:39.953 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:39.953 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:39.953 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.953 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:39.953 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:39.953 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:39.953 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:39.953 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:39.953 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:39.953 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:39.953 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:39.953 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:39.953 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:39.953 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:39.953 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:39.954 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61234 00:07:39.954 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:39.954 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:39.954 15:00:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61234 /var/tmp/spdk-nbd.sock 00:07:39.954 15:00:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61234 ']' 00:07:39.954 15:00:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:39.954 15:00:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.954 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:39.954 15:00:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:39.954 15:00:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.954 15:00:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:39.954 [2024-11-20 15:00:40.559038] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:07:39.954 [2024-11-20 15:00:40.559192] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.954 [2024-11-20 15:00:40.741944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.211 [2024-11-20 15:00:40.883531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.144 15:00:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.144 15:00:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:41.145 15:00:41 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.145 1+0 records in 00:07:41.145 1+0 records out 00:07:41.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607957 s, 6.7 MB/s 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:41.145 15:00:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.403 1+0 records in 00:07:41.403 1+0 records out 00:07:41.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063977 s, 6.4 MB/s 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:41.403 15:00:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:41.403 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:41.661 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:41.661 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:41.661 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:41.661 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:41.661 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:41.661 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:41.661 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:41.661 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:41.661 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:41.661 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:41.661 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:41.661 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.661 1+0 records in 00:07:41.661 1+0 records out 00:07:41.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000714537 s, 5.7 MB/s 00:07:41.662 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.662 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:41.662 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.662 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:41.662 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:41.662 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:41.662 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:41.662 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:41.920 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:41.920 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:41.920 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:41.920 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:41.920 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:41.920 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:41.920 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:41.920 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:41.920 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:41.920 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( 
i = 1 )) 00:07:41.920 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:41.920 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.920 1+0 records in 00:07:41.920 1+0 records out 00:07:41.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075336 s, 5.4 MB/s 00:07:41.920 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.179 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:42.179 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.179 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:42.179 15:00:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:42.179 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:42.179 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:42.179 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:42.179 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:42.179 15:00:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:42.179 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:42.179 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:42.179 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:42.179 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:42.179 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:42.179 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:42.179 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:42.179 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:42.179 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:42.437 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:42.437 1+0 records in 00:07:42.437 1+0 records out 00:07:42.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00080333 s, 5.1 MB/s 00:07:42.437 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.437 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:42.437 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.437 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:42.437 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:42.437 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:42.437 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:42.437 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:42.695 1+0 records in 00:07:42.695 1+0 records out 00:07:42.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000746345 s, 5.5 MB/s 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:42.695 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:42.953 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:42.953 { 00:07:42.953 "nbd_device": "/dev/nbd0", 00:07:42.953 "bdev_name": "Nvme0n1" 00:07:42.953 }, 00:07:42.953 { 00:07:42.953 "nbd_device": "/dev/nbd1", 00:07:42.953 "bdev_name": "Nvme1n1" 00:07:42.953 }, 00:07:42.953 { 00:07:42.953 "nbd_device": "/dev/nbd2", 00:07:42.953 "bdev_name": "Nvme2n1" 00:07:42.953 }, 00:07:42.953 { 00:07:42.953 "nbd_device": "/dev/nbd3", 00:07:42.953 "bdev_name": "Nvme2n2" 00:07:42.953 }, 00:07:42.953 { 00:07:42.953 "nbd_device": "/dev/nbd4", 00:07:42.953 "bdev_name": "Nvme2n3" 00:07:42.953 }, 00:07:42.953 { 00:07:42.953 "nbd_device": "/dev/nbd5", 00:07:42.953 "bdev_name": "Nvme3n1" 00:07:42.953 } 00:07:42.953 ]' 00:07:42.953 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:42.953 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:42.953 { 00:07:42.953 "nbd_device": "/dev/nbd0", 00:07:42.953 "bdev_name": "Nvme0n1" 00:07:42.953 }, 00:07:42.953 { 00:07:42.953 "nbd_device": "/dev/nbd1", 00:07:42.953 "bdev_name": "Nvme1n1" 00:07:42.953 }, 00:07:42.953 { 00:07:42.953 
"nbd_device": "/dev/nbd2", 00:07:42.953 "bdev_name": "Nvme2n1" 00:07:42.953 }, 00:07:42.953 { 00:07:42.953 "nbd_device": "/dev/nbd3", 00:07:42.953 "bdev_name": "Nvme2n2" 00:07:42.953 }, 00:07:42.953 { 00:07:42.953 "nbd_device": "/dev/nbd4", 00:07:42.953 "bdev_name": "Nvme2n3" 00:07:42.953 }, 00:07:42.953 { 00:07:42.953 "nbd_device": "/dev/nbd5", 00:07:42.953 "bdev_name": "Nvme3n1" 00:07:42.953 } 00:07:42.953 ]' 00:07:42.953 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:42.953 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:42.953 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.953 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:42.953 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:42.953 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:42.953 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.953 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:43.212 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:43.212 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:43.212 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:43.212 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.212 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.212 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:43.212 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.212 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.212 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.212 15:00:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:43.212 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:43.471 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:43.471 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:43.471 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.471 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.471 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:43.471 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.471 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.471 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.471 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:43.471 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:43.471 15:00:44 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:43.471 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:43.471 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.472 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.472 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:43.472 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.472 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.472 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.472 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:43.731 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:43.731 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:43.731 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:43.731 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.731 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.731 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:43.731 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.731 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.731 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.731 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:43.990 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:43.990 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:43.990 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:43.990 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.990 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.990 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:43.990 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.990 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.990 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.990 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:44.248 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:44.248 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:44.248 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:44.248 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.248 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.248 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:44.248 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
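The loop running here stops each exported device in turn; condensed, the whole nbd exercise is one start/verify/stop cycle per bdev, with paths and RPCs exactly as used above:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC nbd_start_disk Nvme0n1 /dev/nbd0    # export the bdev as a kernel block device
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
      bs=4096 count=1 iflag=direct         # prove a direct 4 KiB read works
  $RPC nbd_stop_disk /dev/nbd0             # tear the export down
  $RPC nbd_get_disks                       # reports [] once everything is stopped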
00:07:44.248 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.248 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:44.248 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.248 15:00:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:44.507 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:44.507 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:44.507 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:44.507 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:44.507 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:44.507 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:44.507 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
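The trace above is SPDK's stop-and-wait idiom: ask the target over the RPC socket to detach each export, then poll /proc/partitions until the kernel has really dropped the node. A standalone sketch of the same idiom follows; the device list is illustrative, while the rpc.py path, socket name, and 20-try budget mirror the run above.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
for dev in /dev/nbd0 /dev/nbd1; do
    "$rpc" -s "$sock" nbd_stop_disk "$dev"
    name=$(basename "$dev")
    # Poll until the device disappears from /proc/partitions (up to 20 tries).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1
    done
done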
00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:44.506 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:07:44.507 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:44.507 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:07:44.507 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:44.507 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:07:44.507 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:44.507 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:07:44.507 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:07:44.765 /dev/nbd0
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:44.765 1+0 records in
00:07:44.765 1+0 records out
00:07:44.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481119 s, 8.5 MB/s
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:07:44.765 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1
00:07:45.023 /dev/nbd1
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:45.023 1+0 records in
00:07:45.023 1+0 records out
00:07:45.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046355 s, 8.8 MB/s
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:07:45.023 15:00:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10
00:07:45.282 /dev/nbd10
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:45.282 1+0 records in
00:07:45.282 1+0 records out
00:07:45.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503129 s, 8.1 MB/s
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:07:45.282 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11
00:07:45.540 /dev/nbd11
00:07:45.540 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:07:45.540 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:07:45.540 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:07:45.540 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:45.540 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:45.540 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:45.540 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:07:45.540 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:07:45.540 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:45.540 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:45.540 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:45.540 1+0 records in
00:07:45.540 1+0 records out
00:07:45.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000834236 s, 4.9 MB/s
00:07:45.540 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:45.540 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:07:45.540 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:45.540 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:45.541 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:07:45.541 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:45.541 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:07:45.541 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12
00:07:45.800 /dev/nbd12
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:45.800 1+0 records in
00:07:45.800 1+0 records out
00:07:45.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000916544 s, 4.5 MB/s
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:07:45.800 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13
00:07:46.059 /dev/nbd13
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:46.059 1+0 records in
00:07:46.059 1+0 records out
00:07:46.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554711 s, 7.4 MB/s
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:46.059 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:46.060 15:00:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:46.319 {
00:07:46.319 "nbd_device": "/dev/nbd0",
00:07:46.319 "bdev_name": "Nvme0n1"
00:07:46.319 },
00:07:46.319 {
00:07:46.319 "nbd_device": "/dev/nbd1",
00:07:46.319 "bdev_name": "Nvme1n1"
00:07:46.319 },
00:07:46.319 {
00:07:46.319 "nbd_device": "/dev/nbd10",
00:07:46.319 "bdev_name": "Nvme2n1"
00:07:46.319 },
00:07:46.319 {
00:07:46.319 "nbd_device": "/dev/nbd11",
00:07:46.319 "bdev_name": "Nvme2n2"
00:07:46.319 },
00:07:46.319 {
00:07:46.319 "nbd_device": "/dev/nbd12",
00:07:46.319 "bdev_name": "Nvme2n3"
00:07:46.319 },
00:07:46.319 {
00:07:46.319 "nbd_device": "/dev/nbd13",
00:07:46.319 "bdev_name": "Nvme3n1"
00:07:46.319 }
00:07:46.319 ]'
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:07:46.319 {
00:07:46.319 "nbd_device": "/dev/nbd0",
00:07:46.319 "bdev_name": "Nvme0n1"
00:07:46.319 },
00:07:46.319 {
00:07:46.319 "nbd_device": "/dev/nbd1",
00:07:46.319 "bdev_name": "Nvme1n1"
00:07:46.319 },
00:07:46.319 {
00:07:46.319 "nbd_device": "/dev/nbd10",
00:07:46.319 "bdev_name": "Nvme2n1"
00:07:46.319 },
00:07:46.319 {
00:07:46.319 "nbd_device": "/dev/nbd11",
00:07:46.319 "bdev_name": "Nvme2n2"
00:07:46.319 },
00:07:46.319 {
00:07:46.319 "nbd_device": "/dev/nbd12",
00:07:46.319 "bdev_name": "Nvme2n3"
00:07:46.319 },
00:07:46.319 {
00:07:46.319 "nbd_device": "/dev/nbd13",
00:07:46.319 "bdev_name": "Nvme3n1"
00:07:46.319 }
00:07:46.319 ]'
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:46.319 /dev/nbd1
00:07:46.319 /dev/nbd10
00:07:46.319 /dev/nbd11
00:07:46.319 /dev/nbd12
00:07:46.319 /dev/nbd13'
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:46.319 /dev/nbd1
00:07:46.319 /dev/nbd10
00:07:46.319 /dev/nbd11
00:07:46.319 /dev/nbd12
00:07:46.319 /dev/nbd13'
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:07:46.319 256+0 records in
00:07:46.319 256+0 records out
00:07:46.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119552 s, 87.7 MB/s
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:46.319 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:46.579 256+0 records in
00:07:46.579 256+0 records out
00:07:46.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12665 s, 8.3 MB/s
00:07:46.579 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:46.579 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:46.836 256+0 records in
00:07:46.836 256+0 records out
00:07:46.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12961 s, 8.1 MB/s
00:07:46.836 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:46.836 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:07:46.836 256+0 records in
00:07:46.836 256+0 records out
00:07:46.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130433 s, 8.0 MB/s
00:07:46.836 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:46.836 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:07:47.093 256+0 records in
00:07:47.093 256+0 records out
00:07:47.093 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126874 s, 8.3 MB/s
00:07:47.093 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:47.093 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:07:47.093 256+0 records in
00:07:47.093 256+0 records out
00:07:47.093 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131704 s, 8.0 MB/s
00:07:47.093 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:47.093 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:07:47.352 256+0 records in
00:07:47.352 256+0 records out
00:07:47.352 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128798 s, 8.1 MB/s
00:07:47.352 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify
00:07:47.352 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:07:47.352 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:47.352 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:47.352 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:07:47.352 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:47.352 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:47.352 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:47.352 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:07:47.352 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:47.352 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:07:47.352 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:47.352 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:07:47.352 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:47.352 15:00:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:07:47.352 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:47.352 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:07:47.352 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:47.352 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:07:47.352 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
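Condensing the write/readback pass just traced: one 1 MiB file of random data is fanned out to every exported device with O_DIRECT, then compared byte-for-byte against each device. A minimal equivalent, with the temp-file handling and device list illustrative rather than taken from the harness:

tmp=$(mktemp)
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # raw write, bypassing the page cache
    cmp -b -n 1M "$tmp" "$dev"                              # non-zero exit means the readback differs
done
rm "$tmp"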
00:07:47.352 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:07:47.352 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:47.352 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:07:47.352 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:47.352 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:07:47.352 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:47.352 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:47.611 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:47.611 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:47.611 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:47.611 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:47.611 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:47.611 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:47.611 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:47.611 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:47.611 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:47.611 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:47.869 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:47.869 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:47.869 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:47.869 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:47.869 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:47.869 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:47.869 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:47.869 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:47.869 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:47.869 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:07:48.128 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:07:48.128 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:07:48.128 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:07:48.128 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:48.128 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:48.128 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:07:48.128 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:48.128 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:48.128 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:48.128 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:07:48.386 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:07:48.386 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:07:48.386 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:07:48.386 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:48.386 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:48.386 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:07:48.386 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:48.386 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:48.386 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:48.386 15:00:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:07:48.644 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:07:48.644 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:07:48.644 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:07:48.644 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:48.644 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:48.644 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:07:48.644 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:48.644 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:48.644 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:48.644 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:07:48.901 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:07:48.901 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:07:48.901 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:07:48.901 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:48.901 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:48.902 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:07:48.902 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:48.902 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:48.902 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:48.902 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:48.902 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:48.902 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:49.159 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:49.159 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:49.159 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:49.159 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:07:49.159 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:49.159 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:07:49.159 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:07:49.159 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:07:49.159 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:07:49.159 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:49.159 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
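The zero count above comes from listing the active exports over the RPC socket and counting /dev/nbd entries; grep -c exits non-zero on zero matches, which is why the trace shows a guarding true. Roughly, assuming the same rpc.py path and socket:

count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
    | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)   # grep -c still prints 0 when it "fails"
[ "$count" -eq 0 ] && echo 'all NBD exports are detached'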
00:07:49.159 15:00:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:07:49.159 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:49.159 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:07:49.159 15:00:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:07:49.417 malloc_lvol_verify
00:07:49.417 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:07:49.675 f6061bec-8d01-41b4-aed2-64cf7f1973a6
00:07:49.675 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:07:49.675 ad47c142-35b5-4d52-a4b8-689b975f649a
00:07:49.675 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:07:49.933 /dev/nbd0
00:07:49.933 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:07:49.933 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:07:49.933 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:07:49.933 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:07:49.933 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:07:49.933 mke2fs 1.47.0 (5-Feb-2023)
00:07:49.933 Discarding device blocks: 0/4096 done
00:07:49.933 Creating filesystem with 4096 1k blocks and 1024 inodes
00:07:49.933
00:07:49.933 Allocating group tables: 0/1 done
00:07:49.933 Writing inode tables: 0/1 done
00:07:49.933 Creating journal (1024 blocks): done
00:07:49.933 Writing superblocks and filesystem accounting information: 0/1 done
00:07:49.933
00:07:49.933 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:07:49.933 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:49.933 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:07:49.933 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:49.933 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:07:49.933 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:49.933 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:50.190 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:50.191 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:50.191 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:50.191 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:50.191 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:50.191 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:50.191 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:50.191 15:00:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:50.191 15:00:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61234
00:07:50.191 15:00:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61234 ']'
00:07:50.191 15:00:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61234
00:07:50.191 15:00:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:07:50.191 15:00:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:50.191 15:00:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61234
00:07:50.191 killing process with pid 61234
00:07:51.601 15:00:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:51.601 15:00:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:51.601 15:00:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61234'
00:07:51.601 15:00:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61234
00:07:51.601 15:00:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61234
00:07:51.601 ************************************
00:07:51.601 END TEST bdev_nbd
00:07:51.601 ************************************
00:07:51.601 15:00:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:07:51.601
00:07:51.601 real 0m11.899s
00:07:51.601 user 0m15.273s
00:07:51.601 sys 0m5.031s
00:07:51.601 15:00:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:51.601 15:00:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
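The lvol leg of the test in one place: a malloc bdev backs an lvstore, a small lvol carved from it is exported as /dev/nbd0, and mkfs.ext4 proves the export behaves like a real block device. The RPC names and sizes below mirror the trace (16 MB malloc bdev with 512-byte blocks, 4 MiB lvol); treat the sequence as a sketch rather than the harness's exact script:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # backing bdev
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the lvstore UUID
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0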
00:07:51.601 15:00:52 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:07:51.601 15:00:52 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']'
00:07:51.601 skipping fio tests on NVMe due to multi-ns failures.
00:07:51.601 15:00:52 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:07:51.601 15:00:52 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:07:51.601 15:00:52 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:07:51.601 15:00:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:07:51.601 15:00:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:51.601 15:00:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:51.601 ************************************
00:07:51.601 START TEST bdev_verify
00:07:51.601 ************************************
00:07:51.601 15:00:52 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:07:51.859 [2024-11-20 15:00:52.514911] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization...
00:07:51.859 [2024-11-20 15:00:52.515062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61623 ]
00:07:52.118 [2024-11-20 15:00:52.702978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:52.118 [2024-11-20 15:00:52.851115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:52.118 [2024-11-20 15:00:52.851148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:53.054 Running I/O for 5 seconds...
00:07:55.365 18880.00 IOPS, 73.75 MiB/s [2024-11-20T15:00:57.143Z] 19264.00 IOPS, 75.25 MiB/s [2024-11-20T15:00:58.077Z] 19349.33 IOPS, 75.58 MiB/s [2024-11-20T15:00:59.014Z] 19216.00 IOPS, 75.06 MiB/s [2024-11-20T15:00:59.014Z] 19174.40 IOPS, 74.90 MiB/s
00:07:58.178 Latency(us)
00:07:58.178 [2024-11-20T15:00:59.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:58.178 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:58.178 Verification LBA range: start 0x0 length 0xbd0bd
00:07:58.178 Nvme0n1 : 5.06 1579.77 6.17 0.00 0.00 80582.86 13265.12 80011.82
00:07:58.178 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:58.178 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:58.178 Nvme0n1 : 5.07 1578.68 6.17 0.00 0.00 80694.23 11843.86 82538.51
00:07:58.178 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:58.178 Verification LBA range: start 0x0 length 0xa0000
00:07:58.178 Nvme1n1 : 5.08 1587.07 6.20 0.00 0.00 80293.33 12107.05 78327.36
00:07:58.178 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:58.178 Verification LBA range: start 0xa0000 length 0xa0000
00:07:58.178 Nvme1n1 : 5.07 1577.31 6.16 0.00 0.00 80575.82 13212.48 75379.56
00:07:58.178 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:58.178 Verification LBA range: start 0x0 length 0x80000
00:07:58.178 Nvme2n1 : 5.08 1586.58 6.20 0.00 0.00 80132.73 12528.17 77906.25
00:07:58.178 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:58.178 Verification LBA range: start 0x80000 length 0x80000
00:07:58.178 Nvme2n1 : 5.08 1585.93 6.20 0.00 0.00 80113.86 9738.28 73273.99
00:07:58.178 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:58.178 Verification LBA range: start 0x0 length 0x80000
00:07:58.178 Nvme2n2 : 5.09 1585.73 6.19 0.00 0.00 80031.30 14212.63 75379.56
00:07:58.178 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:58.178 Verification LBA range: start 0x80000 length 0x80000
00:07:58.178 Nvme2n2 : 5.09 1585.07 6.19 0.00 0.00 79996.84 11422.74 72852.87
00:07:58.178 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:58.178 Verification LBA range: start 0x0 length 0x80000
00:07:58.178 Nvme2n3 : 5.09 1585.31 6.19 0.00 0.00 79915.36 14107.35 71168.41
00:07:58.178 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:58.178 Verification LBA range: start 0x80000 length 0x80000
00:07:58.178 Nvme2n3 : 5.09 1584.58 6.19 0.00 0.00 79857.33 11580.66 74116.22
00:07:58.178 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:58.178 Verification LBA range: start 0x0 length 0x20000
00:07:58.178 Nvme3n1 : 5.09 1584.92 6.19 0.00 0.00 79779.31 14317.91 78748.48
00:07:58.178 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:58.178 Verification LBA range: start 0x20000 length 0x20000
00:07:58.178 Nvme3n1 : 5.09 1584.21 6.19 0.00 0.00 79731.21 11528.02 74537.33
00:07:58.178 [2024-11-20T15:00:59.014Z] ===================================================================================================================
00:07:58.178 [2024-11-20T15:00:59.014Z] Total : 19005.16 74.24 0.00 0.00 80141.07 9738.28 82538.51
00:07:59.555
00:07:59.555 real 0m7.920s
00:07:59.555 user 0m14.452s
00:07:59.555 sys 0m0.427s
00:07:59.555 15:01:00 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:59.555 ************************************
00:07:59.555 END TEST bdev_verify
00:07:59.555 ************************************
00:07:59.555 15:01:00 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
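To reproduce this verify pass outside the harness, the recorded command line is enough on its own: bdevperf replays queue-depth-128, 4 KiB verified I/O for five seconds against the bdevs described in bdev.json, split across two cores. The flags below are copied from the trace; the trailing '' is the empty extra-arguments slot the wrapper passes through.

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''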
00:07:59.814 15:01:00 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:59.814 15:01:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:07:59.814 15:01:00 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:59.814 15:01:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:59.814 ************************************
00:07:59.814 START TEST bdev_verify_big_io
00:07:59.814 ************************************
00:07:59.814 15:01:00 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:59.814 [2024-11-20 15:01:00.523915] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization...
00:07:59.814 [2024-11-20 15:01:00.524107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61727 ]
00:08:00.073 [2024-11-20 15:01:00.728532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:00.073 [2024-11-20 15:01:00.879243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:00.073 [2024-11-20 15:01:00.879280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:01.008 Running I/O for 5 seconds...
00:08:04.923 1696.00 IOPS, 106.00 MiB/s [2024-11-20T15:01:07.660Z] 2420.50 IOPS, 151.28 MiB/s [2024-11-20T15:01:07.660Z] 2713.67 IOPS, 169.60 MiB/s [2024-11-20T15:01:07.660Z] 2758.50 IOPS, 172.41 MiB/s
00:08:06.824 Latency(us)
00:08:06.824 [2024-11-20T15:01:07.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:06.824 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.824 Verification LBA range: start 0x0 length 0xbd0b
00:08:06.824 Nvme0n1 : 5.62 155.50 9.72 0.00 0.00 796211.84 22950.76 862443.23
00:08:06.824 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.824 Verification LBA range: start 0xbd0b length 0xbd0b
00:08:06.824 Nvme0n1 : 5.63 155.26 9.70 0.00 0.00 797328.95 18423.78 889394.58
00:08:06.824 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.824 Verification LBA range: start 0x0 length 0xa000
00:08:06.824 Nvme1n1 : 5.62 155.27 9.70 0.00 0.00 776963.19 69483.95 717579.72
00:08:06.824 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.824 Verification LBA range: start 0xa000 length 0xa000
00:08:06.824 Nvme1n1 : 5.63 155.22 9.70 0.00 0.00 775761.03 88855.24 724317.56
00:08:06.824 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.824 Verification LBA range: start 0x0 length 0x8000
00:08:06.824 Nvme2n1 : 5.69 157.40 9.84 0.00 0.00 747468.77 72431.76 838860.80
00:08:06.824 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.824 Verification LBA range: start 0x8000 length 0x8000
00:08:06.824 Nvme2n1 : 5.63 159.06 9.94 0.00 0.00 743222.14 57271.62 801802.69
00:08:06.824 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.824 Verification LBA range: start 0x0 length 0x8000
00:08:06.824 Nvme2n2 : 5.69 161.56 10.10 0.00 0.00 713121.85 67378.38 842229.72
00:08:06.824 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.824 Verification LBA range: start 0x8000 length 0x8000
00:08:06.824 Nvme2n2 : 5.73 161.03 10.06 0.00 0.00 714241.17 61061.65 1071316.20
00:08:06.824 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.824 Verification LBA range: start 0x0 length 0x8000
00:08:06.824 Nvme2n3 : 5.75 174.12 10.88 0.00 0.00 649149.50 19160.73 801802.69
00:08:06.824 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.824 Verification LBA range: start 0x8000 length 0x8000
00:08:06.824 Nvme2n3 : 5.76 164.00 10.25 0.00 0.00 685318.20 21055.74 1502537.82
00:08:06.824 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.824 Verification LBA range: start 0x0 length 0x2000
00:08:06.824 Nvme3n1 : 5.76 181.64 11.35 0.00 0.00 606475.31 1342.30 811909.45
00:08:06.824 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.824 Verification LBA range: start 0x2000 length 0x2000
00:08:06.824 Nvme3n1 : 5.78 180.34 11.27 0.00 0.00 608538.35 2947.80 1536227.01
00:08:06.824 [2024-11-20T15:01:07.660Z] ===================================================================================================================
00:08:06.824 [2024-11-20T15:01:07.660Z] Total : 1960.39 122.52 0.00 0.00 713531.33 1342.30 1536227.01
00:08:09.359
00:08:09.359 real 0m9.167s
00:08:09.359 user 0m16.924s
00:08:09.359 sys 0m0.440s
00:08:09.359 ************************************
00:08:09.359 END TEST bdev_verify_big_io
00:08:09.359 ************************************
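The big-I/O pass is the same bdevperf invocation with -o 65536, i.e. a 64 KiB operation size instead of 4 KiB. That is why IOPS drop by roughly an order of magnitude against the earlier run (≈19K down to ≈2K) while aggregate MiB/s actually rises, as the two Total rows show. For reference, the recorded command line:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''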
00:08:09.359 15:01:09 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:09.359 15:01:09 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:08:09.359 15:01:09 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:09.359 15:01:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:08:09.359 15:01:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:09.359 15:01:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:09.359 ************************************
00:08:09.359 START TEST bdev_write_zeroes
00:08:09.359 ************************************
00:08:09.359 15:01:09 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:09.359 [2024-11-20 15:01:09.746890] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization...
00:08:09.359 [2024-11-20 15:01:09.747053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61847 ]
00:08:09.359 [2024-11-20 15:01:09.940458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:09.359 [2024-11-20 15:01:10.087859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:10.295 Running I/O for 1 seconds...
00:08:11.233 70976.00 IOPS, 277.25 MiB/s
00:08:11.233 Latency(us)
00:08:11.233 [2024-11-20T15:01:12.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:11.233 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:11.233 Nvme0n1 : 1.02 11810.40 46.13 0.00 0.00 10809.83 8685.49 29056.93
00:08:11.233 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:11.233 Nvme1n1 : 1.02 11798.13 46.09 0.00 0.00 10805.74 9001.33 30109.71
00:08:11.233 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:11.233 Nvme2n1 : 1.02 11786.48 46.04 0.00 0.00 10772.51 8527.58 30320.27
00:08:11.233 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:11.233 Nvme2n2 : 1.02 11827.32 46.20 0.00 0.00 10698.70 5737.69 25056.33
00:08:11.233 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:11.233 Nvme2n3 : 1.02 11816.62 46.16 0.00 0.00 10668.20 5895.61 23371.87
00:08:11.233 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:11.233 Nvme3n1 : 1.02 11743.75 45.87 0.00 0.00 10708.88 6079.85 28214.70
00:08:11.233 [2024-11-20T15:01:12.069Z] ===================================================================================================================
00:08:11.233 [2024-11-20T15:01:12.069Z] Total : 70782.69 276.49 0.00 0.00 10743.87 5737.69 30320.27
00:08:12.610
00:08:12.610 real 0m3.530s
00:08:12.610 user 0m3.075s
00:08:12.610 sys 0m0.339s
00:08:12.610 15:01:13 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:12.610 15:01:13 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:08:12.610 ************************************
00:08:12.610 END TEST bdev_write_zeroes
00:08:12.610 ************************************
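bdev_write_zeroes swaps the workload for zero-fill commands on a single core; one second of runtime suffices since nothing is verified on readback. The recorded invocation, copied from the trace, can be replayed as:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1 ''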
00:08:12.611 15:01:13 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:12.611 15:01:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:08:12.611 15:01:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:12.611 15:01:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:12.611 ************************************
00:08:12.611 START TEST bdev_json_nonenclosed
00:08:12.611 ************************************
00:08:12.611 15:01:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:12.611 [2024-11-20 15:01:13.350571] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization...
00:08:12.611 [2024-11-20 15:01:13.350754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61900 ]
00:08:12.869 [2024-11-20 15:01:13.541391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:12.869 [2024-11-20 15:01:13.685974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:12.869 [2024-11-20 15:01:13.686094] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:08:12.869 [2024-11-20 15:01:13.686119] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:08:12.869 [2024-11-20 15:01:13.686133] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:13.436
00:08:13.436 real 0m0.725s
00:08:13.436 user 0m0.459s
00:08:13.436 sys 0m0.160s
00:08:13.436 15:01:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:13.436 15:01:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:08:13.436 ************************************
00:08:13.436 END TEST bdev_json_nonenclosed
00:08:13.436 ************************************
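bdev_json_nonenclosed above and bdev_json_nonarray below pin down the expected shape of the --json config: a single object enclosed in {} whose "subsystems" key holds an array, with each violation producing the json_config_prepare_ctx error seen in the log. A minimal well-formed skeleton, with the file name and empty config purely illustrative, might look like:

cat > /tmp/bdev.json << 'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF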
00:08:13.695 [2024-11-20 15:01:14.489704] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:13.695 [2024-11-20 15:01:14.489729] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:13.954 00:08:13.954 real 0m0.711s 00:08:13.954 user 0m0.451s 00:08:13.954 sys 0m0.156s 00:08:13.954 15:01:14 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.954 15:01:14 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:13.954 ************************************ 00:08:13.954 END TEST bdev_json_nonarray 00:08:13.954 ************************************ 00:08:14.213 15:01:14 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:08:14.213 15:01:14 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:08:14.213 15:01:14 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:08:14.213 15:01:14 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:14.213 15:01:14 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:08:14.213 15:01:14 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:14.213 15:01:14 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:14.213 15:01:14 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:14.213 15:01:14 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:14.213 15:01:14 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:14.213 15:01:14 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:14.213 00:08:14.213 real 0m45.253s 00:08:14.213 user 1m5.747s 00:08:14.213 sys 0m8.836s 00:08:14.213 15:01:14 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.213 ************************************ 00:08:14.213 END TEST blockdev_nvme 00:08:14.213 ************************************ 00:08:14.213 15:01:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.213 15:01:14 -- spdk/autotest.sh@209 -- # uname -s 00:08:14.213 15:01:14 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:14.213 15:01:14 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:14.213 15:01:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:14.213 15:01:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.213 15:01:14 -- common/autotest_common.sh@10 -- # set +x 00:08:14.213 ************************************ 00:08:14.213 START TEST blockdev_nvme_gpt 00:08:14.213 ************************************ 00:08:14.213 15:01:14 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:14.213 * Looking for test storage... 
00:08:14.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:14.213 15:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:14.213 15:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:08:14.213 15:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:14.472 15:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.472 15:01:15 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:08:14.472 15:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.472 15:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:14.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.472 --rc genhtml_branch_coverage=1 00:08:14.472 --rc genhtml_function_coverage=1 00:08:14.472 --rc genhtml_legend=1 00:08:14.472 --rc geninfo_all_blocks=1 00:08:14.472 --rc geninfo_unexecuted_blocks=1 00:08:14.472 00:08:14.472 ' 00:08:14.472 15:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:14.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.472 --rc 
genhtml_branch_coverage=1 00:08:14.472 --rc genhtml_function_coverage=1 00:08:14.472 --rc genhtml_legend=1 00:08:14.472 --rc geninfo_all_blocks=1 00:08:14.472 --rc geninfo_unexecuted_blocks=1 00:08:14.472 00:08:14.472 ' 00:08:14.472 15:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:14.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.472 --rc genhtml_branch_coverage=1 00:08:14.472 --rc genhtml_function_coverage=1 00:08:14.472 --rc genhtml_legend=1 00:08:14.472 --rc geninfo_all_blocks=1 00:08:14.472 --rc geninfo_unexecuted_blocks=1 00:08:14.472 00:08:14.472 ' 00:08:14.473 15:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:14.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.473 --rc genhtml_branch_coverage=1 00:08:14.473 --rc genhtml_function_coverage=1 00:08:14.473 --rc genhtml_legend=1 00:08:14.473 --rc geninfo_all_blocks=1 00:08:14.473 --rc geninfo_unexecuted_blocks=1 00:08:14.473 00:08:14.473 ' 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62015 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:08:14.473 15:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62015 00:08:14.473 15:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62015 ']' 00:08:14.473 15:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.473 15:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.473 15:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.473 15:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.473 15:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:14.473 [2024-11-20 15:01:15.270807] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:08:14.473 [2024-11-20 15:01:15.270964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62015 ] 00:08:14.732 [2024-11-20 15:01:15.456838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.992 [2024-11-20 15:01:15.603789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.962 15:01:16 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.962 15:01:16 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:08:15.962 15:01:16 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:15.962 15:01:16 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:08:15.962 15:01:16 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:16.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:16.788 Waiting for block devices as requested 00:08:16.788 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:16.788 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:17.047 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:17.047 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:22.319 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:22.319 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:22.319 15:01:22 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:22.319 15:01:22 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:22.319 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:22.319 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:22.319 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:22.319 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:22.319 15:01:22 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:22.319 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:22.319 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:22.320 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:22.320 BYT; 00:08:22.320 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:22.320 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:22.320 BYT; 00:08:22.320 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:22.320 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:22.320 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:22.320 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:22.320 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:22.320 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:22.320 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:22.320 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:22.320 15:01:22 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:08:22.320 15:01:22 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:22.320 15:01:22 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:22.320 15:01:22 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:08:22.320 15:01:22 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:08:22.320 15:01:22 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:22.320 15:01:22 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:22.320 15:01:22 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:22.320 15:01:22 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:22.320 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:22.320 15:01:22 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:22.320 15:01:22 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:08:22.320 15:01:22 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:22.320 15:01:23 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:22.320 15:01:23 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:08:22.320 15:01:23 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:08:22.320 15:01:23 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:22.320 15:01:23 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:22.320 15:01:23 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:22.320 15:01:23 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:22.320 15:01:23 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:22.320 15:01:23 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:23.258 The operation has completed successfully. 00:08:23.258 15:01:24 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:24.636 The operation has completed successfully. 00:08:24.636 15:01:25 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:25.203 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:25.770 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:26.029 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:26.029 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:26.029 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:26.029 15:01:26 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:26.029 15:01:26 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.029 15:01:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:26.029 [] 00:08:26.029 15:01:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.029 15:01:26 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:26.029 15:01:26 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:26.029 15:01:26 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:26.029 15:01:26 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:26.288 15:01:26 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:26.288 15:01:26 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.288 15:01:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:26.548 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.548 15:01:27 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:26.548 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.548 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:26.548 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.548 15:01:27 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:08:26.548 15:01:27 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:26.548 15:01:27 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.548 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:26.548 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.548 15:01:27 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:26.548 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.548 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:26.548 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.548 15:01:27 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:26.548 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.548 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:26.548 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.548 15:01:27 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:26.548 15:01:27 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:26.548 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.548 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:26.548 15:01:27 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:26.808 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.808 15:01:27 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:26.808 15:01:27 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:26.809 15:01:27 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0ede19ea-00ba-49d2-8b8b-4252dfbc4069"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0ede19ea-00ba-49d2-8b8b-4252dfbc4069",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c1466153-8cb8-4c8e-85eb-15f050f893a1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c1466153-8cb8-4c8e-85eb-15f050f893a1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "10037b54-90a0-4a6a-93f5-98f28ff8d36a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "10037b54-90a0-4a6a-93f5-98f28ff8d36a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "cdd2d7e4-20ca-44ea-9119-8041ad4f0b0d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cdd2d7e4-20ca-44ea-9119-8041ad4f0b0d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d7fa5a19-3695-4900-8067-584cfee13e2f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d7fa5a19-3695-4900-8067-584cfee13e2f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:26.809 15:01:27 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:26.809 15:01:27 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:26.809 15:01:27 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:26.809 15:01:27 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62015 00:08:26.809 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62015 ']' 00:08:26.809 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62015 00:08:26.809 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:08:26.809 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.809 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62015 00:08:26.809 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.809 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.809 killing process with pid 62015 00:08:26.809 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62015' 00:08:26.809 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62015 00:08:26.809 15:01:27 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62015 00:08:29.389 15:01:30 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:29.389 15:01:30 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:29.389 15:01:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:29.389 15:01:30 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.389 15:01:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:29.389 ************************************ 00:08:29.389 START TEST bdev_hello_world 00:08:29.389 ************************************ 00:08:29.389 15:01:30 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:29.648 
[2024-11-20 15:01:30.248576] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:08:29.648 [2024-11-20 15:01:30.248744] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62667 ] 00:08:29.648 [2024-11-20 15:01:30.438111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.906 [2024-11-20 15:01:30.613248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.840 [2024-11-20 15:01:31.381073] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:30.840 [2024-11-20 15:01:31.381158] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:30.840 [2024-11-20 15:01:31.381212] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:30.840 [2024-11-20 15:01:31.384760] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:30.840 [2024-11-20 15:01:31.385469] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:30.840 [2024-11-20 15:01:31.385504] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:30.840 [2024-11-20 15:01:31.385738] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:30.840 00:08:30.840 [2024-11-20 15:01:31.385769] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:32.215 00:08:32.215 real 0m2.513s 00:08:32.215 user 0m2.030s 00:08:32.215 sys 0m0.372s 00:08:32.215 15:01:32 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.215 15:01:32 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:32.215 ************************************ 00:08:32.215 END TEST bdev_hello_world 00:08:32.215 ************************************ 00:08:32.215 15:01:32 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:32.215 15:01:32 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:32.215 15:01:32 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.215 15:01:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:32.215 ************************************ 00:08:32.215 START TEST bdev_bounds 00:08:32.215 ************************************ 00:08:32.215 15:01:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:32.215 15:01:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:32.215 15:01:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62709 00:08:32.215 15:01:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:32.215 Process bdevio pid: 62709 00:08:32.215 15:01:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62709' 00:08:32.215 15:01:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62709 00:08:32.216 15:01:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62709 ']' 00:08:32.216 15:01:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.216 15:01:32 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.216 15:01:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.216 15:01:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.216 15:01:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:32.216 [2024-11-20 15:01:32.820688] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:08:32.216 [2024-11-20 15:01:32.820853] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62709 ] 00:08:32.216 [2024-11-20 15:01:33.011060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.475 [2024-11-20 15:01:33.160601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.475 [2024-11-20 15:01:33.160789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.475 [2024-11-20 15:01:33.160818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.410 15:01:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.410 15:01:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:08:33.410 15:01:33 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:33.410 I/O targets: 00:08:33.410 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:33.410 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:33.410 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:33.410 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:33.410 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:33.410 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:33.410 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:33.410 00:08:33.410 00:08:33.410 CUnit - A unit testing framework for C - Version 2.1-3 00:08:33.410 http://cunit.sourceforge.net/ 00:08:33.410 00:08:33.410 00:08:33.410 Suite: bdevio tests on: Nvme3n1 00:08:33.410 Test: blockdev write read block ...passed 00:08:33.410 Test: blockdev write zeroes read block ...passed 00:08:33.410 Test: blockdev write zeroes read no split ...passed 00:08:33.410 Test: blockdev write zeroes read split ...passed 00:08:33.410 Test: blockdev write zeroes read split partial ...passed 00:08:33.410 Test: blockdev reset ...[2024-11-20 15:01:34.071514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:33.410 [2024-11-20 15:01:34.075698] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
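A note before the per-suite output that follows: the COMPARE FAILURE notices in each "comparev and writev" test appear to be the intended outcome rather than a problem, since the compare buffer is deliberately mismatched and the suite still records those tests as passed. The (02/85) pair in those lines is the NVMe completion status, Status Code Type 02h (Media and Data Integrity Errors) with Status Code 85h (Compare Failure); a one-liner spells it out:

$ printf 'SCT 0x%02x: Media and Data Integrity Errors, SC 0x%02x: Compare Failure\n' 0x02 0x85
SCT 0x02: Media and Data Integrity Errors, SC 0x85: Compare Failure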
00:08:33.410 passed 00:08:33.410 Test: blockdev write read 8 blocks ...passed 00:08:33.410 Test: blockdev write read size > 128k ...passed 00:08:33.410 Test: blockdev write read invalid size ...passed 00:08:33.410 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:33.410 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:33.410 Test: blockdev write read max offset ...passed 00:08:33.410 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:33.410 Test: blockdev writev readv 8 blocks ...passed 00:08:33.410 Test: blockdev writev readv 30 x 1block ...passed 00:08:33.410 Test: blockdev writev readv block ...passed 00:08:33.410 Test: blockdev writev readv size > 128k ...passed 00:08:33.410 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:33.410 Test: blockdev comparev and writev ...[2024-11-20 15:01:34.084048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3e04000 len:0x1000 00:08:33.410 [2024-11-20 15:01:34.084102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:33.410 passed 00:08:33.410 Test: blockdev nvme passthru rw ...passed 00:08:33.410 Test: blockdev nvme passthru vendor specific ...passed 00:08:33.410 Test: blockdev nvme admin passthru ...[2024-11-20 15:01:34.084951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:33.410 [2024-11-20 15:01:34.084985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:33.410 passed 00:08:33.410 Test: blockdev copy ...passed 00:08:33.410 Suite: bdevio tests on: Nvme2n3 00:08:33.410 Test: blockdev write read block ...passed 00:08:33.410 Test: blockdev write zeroes read block ...passed 00:08:33.410 Test: blockdev write zeroes read no split ...passed 00:08:33.410 Test: blockdev write zeroes read split ...passed 00:08:33.410 Test: blockdev write zeroes read split partial ...passed 00:08:33.410 Test: blockdev reset ...[2024-11-20 15:01:34.171849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:33.410 passed 00:08:33.410 Test: blockdev write read 8 blocks ...[2024-11-20 15:01:34.176205] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:33.410 passed 00:08:33.410 Test: blockdev write read size > 128k ...passed 00:08:33.410 Test: blockdev write read invalid size ...passed 00:08:33.410 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:33.410 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:33.410 Test: blockdev write read max offset ...passed 00:08:33.410 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:33.410 Test: blockdev writev readv 8 blocks ...passed 00:08:33.410 Test: blockdev writev readv 30 x 1block ...passed 00:08:33.410 Test: blockdev writev readv block ...passed 00:08:33.410 Test: blockdev writev readv size > 128k ...passed 00:08:33.410 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:33.410 Test: blockdev comparev and writev ...[2024-11-20 15:01:34.184972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3e02000 len:0x1000 00:08:33.410 [2024-11-20 15:01:34.185027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:33.410 passed 00:08:33.410 Test: blockdev nvme passthru rw ...passed 00:08:33.410 Test: blockdev nvme passthru vendor specific ...passed 00:08:33.410 Test: blockdev nvme admin passthru ...[2024-11-20 15:01:34.185991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:33.410 [2024-11-20 15:01:34.186030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:33.410 passed 00:08:33.410 Test: blockdev copy ...passed 00:08:33.410 Suite: bdevio tests on: Nvme2n2 00:08:33.410 Test: blockdev write read block ...passed 00:08:33.410 Test: blockdev write zeroes read block ...passed 00:08:33.410 Test: blockdev write zeroes read no split ...passed 00:08:33.410 Test: blockdev write zeroes read split ...passed 00:08:33.716 Test: blockdev write zeroes read split partial ...passed 00:08:33.716 Test: blockdev reset ...[2024-11-20 15:01:34.266341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:33.716 passed 00:08:33.716 Test: blockdev write read 8 blocks ...[2024-11-20 15:01:34.271019] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:33.716 passed 00:08:33.716 Test: blockdev write read size > 128k ...passed 00:08:33.716 Test: blockdev write read invalid size ...passed 00:08:33.716 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:33.716 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:33.716 Test: blockdev write read max offset ...passed 00:08:33.716 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:33.716 Test: blockdev writev readv 8 blocks ...passed 00:08:33.716 Test: blockdev writev readv 30 x 1block ...passed 00:08:33.716 Test: blockdev writev readv block ...passed 00:08:33.716 Test: blockdev writev readv size > 128k ...passed 00:08:33.716 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:33.716 Test: blockdev comparev and writev ...[2024-11-20 15:01:34.278785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7c38000 len:0x1000 00:08:33.716 [2024-11-20 15:01:34.278841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:33.716 passed 00:08:33.716 Test: blockdev nvme passthru rw ...passed 00:08:33.716 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:01:34.279780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:33.716 passed 00:08:33.716 Test: blockdev nvme admin passthru ...[2024-11-20 15:01:34.279819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:33.716 passed 00:08:33.716 Test: blockdev copy ...passed 00:08:33.716 Suite: bdevio tests on: Nvme2n1 00:08:33.716 Test: blockdev write read block ...passed 00:08:33.716 Test: blockdev write zeroes read block ...passed 00:08:33.716 Test: blockdev write zeroes read no split ...passed 00:08:33.716 Test: blockdev write zeroes read split ...passed 00:08:33.716 Test: blockdev write zeroes read split partial ...passed 00:08:33.716 Test: blockdev reset ...[2024-11-20 15:01:34.349637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:33.716 [2024-11-20 15:01:34.353834] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:33.716 passed 00:08:33.716 Test: blockdev write read 8 blocks ...passed 00:08:33.716 Test: blockdev write read size > 128k ...passed 00:08:33.716 Test: blockdev write read invalid size ...passed 00:08:33.716 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:33.716 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:33.716 Test: blockdev write read max offset ...passed 00:08:33.716 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:33.716 Test: blockdev writev readv 8 blocks ...passed 00:08:33.716 Test: blockdev writev readv 30 x 1block ...passed 00:08:33.716 Test: blockdev writev readv block ...passed 00:08:33.716 Test: blockdev writev readv size > 128k ...passed 00:08:33.716 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:33.716 Test: blockdev comparev and writev ...[2024-11-20 15:01:34.366742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7c34000 len:0x1000 00:08:33.716 [2024-11-20 15:01:34.366811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:33.716 passed 00:08:33.716 Test: blockdev nvme passthru rw ...passed 00:08:33.716 Test: blockdev nvme passthru vendor specific ...passed 00:08:33.716 Test: blockdev nvme admin passthru ...[2024-11-20 15:01:34.367397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:33.716 [2024-11-20 15:01:34.367438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:33.716 passed 00:08:33.716 Test: blockdev copy ...passed 00:08:33.716 Suite: bdevio tests on: Nvme1n1p2 00:08:33.716 Test: blockdev write read block ...passed 00:08:33.716 Test: blockdev write zeroes read block ...passed 00:08:33.716 Test: blockdev write zeroes read no split ...passed 00:08:33.716 Test: blockdev write zeroes read split ...passed 00:08:33.716 Test: blockdev write zeroes read split partial ...passed 00:08:33.716 Test: blockdev reset ...[2024-11-20 15:01:34.431958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:33.716 passed 00:08:33.716 Test: blockdev write read 8 blocks ...[2024-11-20 15:01:34.435776] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:33.716 passed 00:08:33.716 Test: blockdev write read size > 128k ...passed 00:08:33.716 Test: blockdev write read invalid size ...passed 00:08:33.716 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:33.716 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:33.716 Test: blockdev write read max offset ...passed 00:08:33.716 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:33.716 Test: blockdev writev readv 8 blocks ...passed 00:08:33.716 Test: blockdev writev readv 30 x 1block ...passed 00:08:33.716 Test: blockdev writev readv block ...passed 00:08:33.716 Test: blockdev writev readv size > 128k ...passed 00:08:33.716 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:33.716 Test: blockdev comparev and writev ...[2024-11-20 15:01:34.446340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c7c30000 len:0x1000 00:08:33.716 [2024-11-20 15:01:34.446401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:33.716 passed 00:08:33.716 Test: blockdev nvme passthru rw ...passed 00:08:33.716 Test: blockdev nvme passthru vendor specific ...passed 00:08:33.716 Test: blockdev nvme admin passthru ...passed 00:08:33.716 Test: blockdev copy ...passed 00:08:33.716 Suite: bdevio tests on: Nvme1n1p1 00:08:33.716 Test: blockdev write read block ...passed 00:08:33.716 Test: blockdev write zeroes read block ...passed 00:08:33.716 Test: blockdev write zeroes read no split ...passed 00:08:33.716 Test: blockdev write zeroes read split ...passed 00:08:33.716 Test: blockdev write zeroes read split partial ...passed 00:08:33.716 Test: blockdev reset ...[2024-11-20 15:01:34.507736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:33.716 [2024-11-20 15:01:34.511729] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
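A detail worth connecting across the two GPT suites: bdevio targets partition-relative LBA 0, and the gpt virtual bdev shifts that by its offset_blocks before the IO reaches the NVMe layer. That is why the Nvme1n1p2 comparev above is logged at lba:655360 and the Nvme1n1p1 comparev below at lba:256, the same offset_blocks values reported for the two partitions in the bdev_get_bdevs dump earlier. The mapping is plain addition (values copied from that dump):

$ offset_blocks=655360 io_lba=0   # Nvme1n1p2; IO issued at partition-relative LBA 0
$ echo $((offset_blocks + io_lba))
655360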
00:08:33.716 passed 00:08:33.716 Test: blockdev write read 8 blocks ...passed 00:08:33.716 Test: blockdev write read size > 128k ...passed 00:08:33.716 Test: blockdev write read invalid size ...passed 00:08:33.716 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:33.716 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:33.716 Test: blockdev write read max offset ...passed 00:08:33.716 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:33.716 Test: blockdev writev readv 8 blocks ...passed 00:08:33.716 Test: blockdev writev readv 30 x 1block ...passed 00:08:33.716 Test: blockdev writev readv block ...passed 00:08:33.716 Test: blockdev writev readv size > 128k ...passed 00:08:33.716 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:33.716 Test: blockdev comparev and writev ...[2024-11-20 15:01:34.519290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b400e000 len:0x1000 00:08:33.716 [2024-11-20 15:01:34.519354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:33.716 passed 00:08:33.716 Test: blockdev nvme passthru rw ...passed 00:08:33.716 Test: blockdev nvme passthru vendor specific ...passed 00:08:33.716 Test: blockdev nvme admin passthru ...passed 00:08:33.716 Test: blockdev copy ...passed 00:08:33.716 Suite: bdevio tests on: Nvme0n1 00:08:33.716 Test: blockdev write read block ...passed 00:08:33.716 Test: blockdev write zeroes read block ...passed 00:08:33.716 Test: blockdev write zeroes read no split ...passed 00:08:33.973 Test: blockdev write zeroes read split ...passed 00:08:33.973 Test: blockdev write zeroes read split partial ...passed 00:08:33.973 Test: blockdev reset ...[2024-11-20 15:01:34.578678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:33.973 [2024-11-20 15:01:34.582321] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:33.973 passed 00:08:33.973 Test: blockdev write read 8 blocks ...passed 00:08:33.973 Test: blockdev write read size > 128k ...passed 00:08:33.973 Test: blockdev write read invalid size ...passed 00:08:33.973 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:33.973 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:33.973 Test: blockdev write read max offset ...passed 00:08:33.973 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:33.973 Test: blockdev writev readv 8 blocks ...passed 00:08:33.973 Test: blockdev writev readv 30 x 1block ...passed 00:08:33.973 Test: blockdev writev readv block ...passed 00:08:33.973 Test: blockdev writev readv size > 128k ...passed 00:08:33.973 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:33.973 Test: blockdev comparev and writev ...passed 00:08:33.973 Test: blockdev nvme passthru rw ...[2024-11-20 15:01:34.588066] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:33.973 separate metadata which is not supported yet. 
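Nvme0n1 gets its comparev_and_writev case skipped because the namespace is formatted with separate metadata, which that code path does not support yet. Whether a bdev falls into this bucket can be checked up front -- a sketch, assuming an RPC-serving app has the bdev loaded and that bdev_get_bdevs reports the metadata size under an md_size field (the field name is an assumption here):

  # Non-zero md_size means per-block metadata; expect comparev_and_writev to be skipped.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq -r '.[0].md_size'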
00:08:33.973 passed 00:08:33.973 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:01:34.588481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:33.973 [2024-11-20 15:01:34.588543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:33.973 passed 00:08:33.973 Test: blockdev nvme admin passthru ...passed 00:08:33.973 Test: blockdev copy ...passed 00:08:33.973 00:08:33.973 Run Summary: Type Total Ran Passed Failed Inactive 00:08:33.973 suites 7 7 n/a 0 0 00:08:33.973 tests 161 161 161 0 0 00:08:33.973 asserts 1025 1025 1025 0 n/a 00:08:33.973 00:08:33.973 Elapsed time = 1.618 seconds 00:08:33.973 0 00:08:33.973 15:01:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62709 00:08:33.973 15:01:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62709 ']' 00:08:33.973 15:01:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62709 00:08:33.973 15:01:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:33.973 15:01:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.973 15:01:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62709 00:08:33.973 15:01:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.973 15:01:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.973 15:01:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62709' 00:08:33.973 killing process with pid 62709 00:08:33.973 15:01:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62709 00:08:33.973 15:01:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62709 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:35.347 00:08:35.347 real 0m3.076s 00:08:35.347 user 0m7.732s 00:08:35.347 sys 0m0.530s 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.347 ************************************ 00:08:35.347 END TEST bdev_bounds 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:35.347 ************************************ 00:08:35.347 15:01:35 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:35.347 15:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:35.347 15:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.347 15:01:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:35.347 ************************************ 00:08:35.347 START TEST bdev_nbd 00:08:35.347 ************************************ 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62774 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62774 /var/tmp/spdk-nbd.sock 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62774 ']' 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.347 15:01:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:35.347 [2024-11-20 15:01:35.995535] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
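From here the nbd test works through a dedicated RPC socket: a bdev_svc app is started with -r /var/tmp/spdk-nbd.sock and the bdev.json config, and each bdev is then exported as a kernel /dev/nbdX node via nbd_start_disk. Boiled down to a standalone sketch (the sleep is a crude stand-in for the harness's waitforlisten; loading the nbd module is assumed, since the trace only checks /sys/module/nbd):

  sudo modprobe nbd
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-nbd.sock -i 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  sleep 1   # stand-in for waitforlisten on the socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_start_disk Nvme0n1 /dev/nbd0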
00:08:35.347 [2024-11-20 15:01:35.995729] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.606 [2024-11-20 15:01:36.193774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.606 [2024-11-20 15:01:36.357475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.540 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.540 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:36.540 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:36.540 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.540 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:36.540 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:36.540 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:36.540 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.540 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:36.540 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:36.540 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:36.540 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:36.540 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:36.540 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:36.540 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:36.799 1+0 records in 00:08:36.799 1+0 records out 00:08:36.799 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000794747 s, 5.2 MB/s 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:36.799 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:37.071 1+0 records in 00:08:37.071 1+0 records out 00:08:37.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000720468 s, 5.7 MB/s 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:37.071 15:01:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:37.354 1+0 records in 00:08:37.354 1+0 records out 00:08:37.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000860877 s, 4.8 MB/s 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:37.354 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:37.613 1+0 records in 00:08:37.613 1+0 records out 00:08:37.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000902207 s, 4.5 MB/s 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:37.613 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:37.871 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:38.127 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:38.127 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:38.127 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:38.127 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:38.127 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:38.127 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:38.127 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:38.127 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:38.127 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:38.127 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:38.127 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.127 1+0 records in 00:08:38.127 1+0 records out 00:08:38.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000754007 s, 5.4 MB/s 00:08:38.127 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.127 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:38.127 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.127 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:38.127 15:01:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:38.127 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:38.128 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:38.128 15:01:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.385 1+0 records in 00:08:38.385 1+0 records out 00:08:38.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00077444 s, 5.3 MB/s 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:38.385 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.643 1+0 records in 00:08:38.643 1+0 records out 00:08:38.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065157 s, 6.3 MB/s 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:38.643 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:38.901 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:38.901 { 00:08:38.901 "nbd_device": "/dev/nbd0", 00:08:38.901 "bdev_name": "Nvme0n1" 00:08:38.901 }, 00:08:38.901 { 00:08:38.901 "nbd_device": "/dev/nbd1", 00:08:38.901 "bdev_name": "Nvme1n1p1" 00:08:38.901 }, 00:08:38.901 { 00:08:38.901 "nbd_device": "/dev/nbd2", 00:08:38.901 "bdev_name": "Nvme1n1p2" 00:08:38.901 }, 00:08:38.901 { 00:08:38.901 "nbd_device": "/dev/nbd3", 00:08:38.901 "bdev_name": "Nvme2n1" 00:08:38.901 }, 00:08:38.901 { 00:08:38.901 "nbd_device": "/dev/nbd4", 00:08:38.901 "bdev_name": "Nvme2n2" 00:08:38.901 }, 00:08:38.901 { 00:08:38.901 "nbd_device": "/dev/nbd5", 00:08:38.901 "bdev_name": "Nvme2n3" 00:08:38.901 }, 00:08:38.901 { 00:08:38.901 "nbd_device": "/dev/nbd6", 00:08:38.901 "bdev_name": "Nvme3n1" 00:08:38.901 } 00:08:38.901 ]' 00:08:38.901 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:38.901 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:38.901 { 00:08:38.901 "nbd_device": "/dev/nbd0", 00:08:38.901 "bdev_name": "Nvme0n1" 00:08:38.901 }, 00:08:38.901 { 00:08:38.901 "nbd_device": "/dev/nbd1", 00:08:38.901 "bdev_name": "Nvme1n1p1" 00:08:38.901 }, 00:08:38.901 { 00:08:38.901 "nbd_device": "/dev/nbd2", 00:08:38.901 "bdev_name": "Nvme1n1p2" 00:08:38.901 }, 00:08:38.901 { 00:08:38.901 "nbd_device": "/dev/nbd3", 00:08:38.901 "bdev_name": "Nvme2n1" 00:08:38.901 }, 00:08:38.901 { 00:08:38.901 "nbd_device": "/dev/nbd4", 00:08:38.901 "bdev_name": "Nvme2n2" 00:08:38.901 }, 00:08:38.901 { 00:08:38.901 "nbd_device": "/dev/nbd5", 00:08:38.901 "bdev_name": "Nvme2n3" 00:08:38.901 }, 00:08:38.901 { 00:08:38.901 "nbd_device": "/dev/nbd6", 00:08:38.901 "bdev_name": "Nvme3n1" 00:08:38.901 } 00:08:38.901 ]' 00:08:38.901 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:38.901 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:38.901 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:38.901 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:38.901 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:38.901 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:38.901 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.901 15:01:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.470 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:39.728 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:39.728 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:39.728 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:39.728 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.728 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.728 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:39.728 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:39.728 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.728 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.728 15:01:40 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:39.987 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:39.987 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:39.987 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:39.987 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.987 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.987 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:39.987 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:39.987 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.987 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.987 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:40.246 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:40.246 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:40.246 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:40.246 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.246 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.246 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:40.246 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:40.246 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.246 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.246 15:01:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:40.505 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:40.505 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:40.505 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:40.505 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.505 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.505 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:40.505 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:40.505 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.505 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.505 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:40.763 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:40.763 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:40.763 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
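This teardown loop is the mirror image of setup: the device list is parsed out of nbd_get_disks with jq, each device gets an nbd_stop_disk call, and waitfornbd_exit polls /proc/partitions until the nbdX entry disappears. Lifted out of nbd_common.sh into a standalone sketch (the 0.1 s poll interval is an assumption; the 20-try bound matches the trace):

  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || return 0   # entry gone: detached
          sleep 0.1                                             # assumed poll interval
      done
      return 1                                                  # still present after 20 tries
  }
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for dev in $("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'); do
      "$rpc" -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
      waitfornbd_exit "$(basename "$dev")"
  done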
00:08:40.763 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.763 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.763 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:40.763 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:40.764 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.764 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:40.764 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.764 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:41.022 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:41.023 15:01:41 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:41.023 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:41.023 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:41.023 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:41.023 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:41.023 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:41.281 /dev/nbd0 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.281 1+0 records in 00:08:41.281 1+0 records out 00:08:41.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060865 s, 6.7 MB/s 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:41.281 15:01:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:41.550 /dev/nbd1 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:41.550 15:01:42 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.550 1+0 records in 00:08:41.550 1+0 records out 00:08:41.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613043 s, 6.7 MB/s 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:41.550 15:01:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:41.808 /dev/nbd10 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.808 1+0 records in 00:08:41.808 1+0 records out 00:08:41.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000869506 s, 4.7 MB/s 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:41.808 15:01:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:42.067 /dev/nbd11 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.067 1+0 records in 00:08:42.067 1+0 records out 00:08:42.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000786227 s, 5.2 MB/s 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:42.067 15:01:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:42.325 /dev/nbd12 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
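Every nbd_start_disk above is followed by the same readiness probe: one 4 KiB block read with O_DIRECT, then a size check on the output file -- note the harness only insists the read was non-empty ('[' 4096 '!=' 0 ']'), not that a full block arrived. As a standalone sketch (the /tmp path is a stand-in for the harness's nbdtest file):

  if dd if=/dev/nbd11 of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
      size=$(stat -c %s /tmp/nbdtest)
      [ "$size" != 0 ] && echo "nbd11 is serving I/O"   # non-empty read is all that is required
  fi
  rm -f /tmp/nbdtest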
00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.584 1+0 records in 00:08:42.584 1+0 records out 00:08:42.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00074245 s, 5.5 MB/s 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:42.584 15:01:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:42.844 /dev/nbd13 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.844 1+0 records in 00:08:42.844 1+0 records out 00:08:42.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102186 s, 4.0 MB/s 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:42.844 15:01:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:43.103 /dev/nbd14 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.103 1+0 records in 00:08:43.103 1+0 records out 00:08:43.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554293 s, 7.4 MB/s 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.103 15:01:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:43.363 { 00:08:43.363 "nbd_device": "/dev/nbd0", 00:08:43.363 "bdev_name": "Nvme0n1" 00:08:43.363 }, 00:08:43.363 { 00:08:43.363 "nbd_device": "/dev/nbd1", 00:08:43.363 "bdev_name": "Nvme1n1p1" 00:08:43.363 }, 00:08:43.363 { 00:08:43.363 "nbd_device": "/dev/nbd10", 00:08:43.363 "bdev_name": "Nvme1n1p2" 00:08:43.363 }, 00:08:43.363 { 00:08:43.363 "nbd_device": "/dev/nbd11", 00:08:43.363 "bdev_name": "Nvme2n1" 00:08:43.363 }, 00:08:43.363 { 00:08:43.363 "nbd_device": "/dev/nbd12", 00:08:43.363 "bdev_name": "Nvme2n2" 00:08:43.363 }, 00:08:43.363 { 00:08:43.363 "nbd_device": "/dev/nbd13", 00:08:43.363 "bdev_name": "Nvme2n3" 
00:08:43.363 }, 00:08:43.363 { 00:08:43.363 "nbd_device": "/dev/nbd14", 00:08:43.363 "bdev_name": "Nvme3n1" 00:08:43.363 } 00:08:43.363 ]' 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:43.363 { 00:08:43.363 "nbd_device": "/dev/nbd0", 00:08:43.363 "bdev_name": "Nvme0n1" 00:08:43.363 }, 00:08:43.363 { 00:08:43.363 "nbd_device": "/dev/nbd1", 00:08:43.363 "bdev_name": "Nvme1n1p1" 00:08:43.363 }, 00:08:43.363 { 00:08:43.363 "nbd_device": "/dev/nbd10", 00:08:43.363 "bdev_name": "Nvme1n1p2" 00:08:43.363 }, 00:08:43.363 { 00:08:43.363 "nbd_device": "/dev/nbd11", 00:08:43.363 "bdev_name": "Nvme2n1" 00:08:43.363 }, 00:08:43.363 { 00:08:43.363 "nbd_device": "/dev/nbd12", 00:08:43.363 "bdev_name": "Nvme2n2" 00:08:43.363 }, 00:08:43.363 { 00:08:43.363 "nbd_device": "/dev/nbd13", 00:08:43.363 "bdev_name": "Nvme2n3" 00:08:43.363 }, 00:08:43.363 { 00:08:43.363 "nbd_device": "/dev/nbd14", 00:08:43.363 "bdev_name": "Nvme3n1" 00:08:43.363 } 00:08:43.363 ]' 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:43.363 /dev/nbd1 00:08:43.363 /dev/nbd10 00:08:43.363 /dev/nbd11 00:08:43.363 /dev/nbd12 00:08:43.363 /dev/nbd13 00:08:43.363 /dev/nbd14' 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:43.363 /dev/nbd1 00:08:43.363 /dev/nbd10 00:08:43.363 /dev/nbd11 00:08:43.363 /dev/nbd12 00:08:43.363 /dev/nbd13 00:08:43.363 /dev/nbd14' 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:43.363 256+0 records in 00:08:43.363 256+0 records out 00:08:43.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00520095 s, 202 MB/s 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:43.363 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:43.623 256+0 records in 00:08:43.623 256+0 records out 00:08:43.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.153205 s, 6.8 MB/s 00:08:43.623 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:43.624 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:43.624 256+0 records in 00:08:43.624 256+0 records out 00:08:43.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152528 s, 6.9 MB/s 00:08:43.624 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:43.624 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:43.884 256+0 records in 00:08:43.884 256+0 records out 00:08:43.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159373 s, 6.6 MB/s 00:08:43.884 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:43.884 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:44.144 256+0 records in 00:08:44.144 256+0 records out 00:08:44.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154527 s, 6.8 MB/s 00:08:44.144 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:44.144 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:44.144 256+0 records in 00:08:44.144 256+0 records out 00:08:44.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152563 s, 6.9 MB/s 00:08:44.144 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:44.144 15:01:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:44.403 256+0 records in 00:08:44.403 256+0 records out 00:08:44.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147653 s, 7.1 MB/s 00:08:44.403 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:44.403 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:44.663 256+0 records in 00:08:44.663 256+0 records out 00:08:44.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142065 s, 7.4 MB/s 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.663 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:44.923 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:44.923 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:44.923 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:44.923 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.923 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.923 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:44.923 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:44.923 15:01:45 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:44.923 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.923 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:45.182 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:45.182 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:45.182 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:45.182 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.182 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.182 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:45.182 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:45.182 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.182 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.182 15:01:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:45.441 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:45.441 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:45.441 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:45.441 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.441 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.441 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:45.441 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:45.441 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.441 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.441 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.723 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:45.982 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:45.982 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:45.982 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:45.982 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.982 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.982 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:45.982 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:45.982 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.982 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.982 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:46.240 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:46.240 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:46.240 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:46.240 15:01:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.240 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.240 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:46.240 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:46.240 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.240 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:46.240 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.240 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:46.498 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:46.498 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:46.498 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:46.498 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:46.498 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:46.498 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:46.498 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:46.498 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:46.498 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:46.498 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:46.498 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:46.498 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:46.498 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:46.498 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.498 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:46.498 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:46.756 malloc_lvol_verify 00:08:46.756 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:47.015 381a5c19-a4eb-4e36-9923-bdcf5b2b9b26 00:08:47.015 15:01:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:47.582 f823c4f4-6ef3-4671-9d8d-5240f66058b3 00:08:47.582 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:47.841 /dev/nbd0 00:08:47.841 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:47.841 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:47.841 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:47.841 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:47.841 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:47.841 mke2fs 1.47.0 (5-Feb-2023) 00:08:47.841 Discarding device blocks: 0/4096 done 00:08:47.841 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:47.841 00:08:47.841 Allocating group tables: 0/1 done 00:08:47.841 Writing inode tables: 0/1 done 00:08:47.841 Creating journal (1024 blocks): done 00:08:47.841 Writing superblocks and filesystem accounting information: 0/1 done 00:08:47.841 00:08:47.841 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:47.841 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.841 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:47.841 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:47.841 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:47.841 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:47.841 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62774 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62774 ']' 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62774 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62774 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62774' 00:08:48.099 killing process with pid 62774 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62774 00:08:48.099 15:01:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62774 00:08:49.995 15:01:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:49.995 00:08:49.995 real 0m14.446s 00:08:49.995 user 0m18.802s 00:08:49.995 sys 0m6.025s 00:08:49.995 15:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.995 15:01:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:49.995 ************************************ 00:08:49.995 END TEST bdev_nbd 00:08:49.995 ************************************ 00:08:49.995 15:01:50 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:49.995 15:01:50 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:49.995 15:01:50 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:49.995 15:01:50 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:49.995 skipping fio tests on NVMe due to multi-ns failures. 
00:08:49.995 15:01:50 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:49.995 15:01:50 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:49.995 15:01:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:49.995 15:01:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.995 15:01:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:49.995 ************************************ 00:08:49.995 START TEST bdev_verify 00:08:49.995 ************************************ 00:08:49.995 15:01:50 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:49.995 [2024-11-20 15:01:50.466833] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:08:49.995 [2024-11-20 15:01:50.466998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63223 ] 00:08:49.995 [2024-11-20 15:01:50.659039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:50.278 [2024-11-20 15:01:50.856282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.278 [2024-11-20 15:01:50.856289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.869 Running I/O for 5 seconds... 
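[Editor's note] The per-second progress lines that follow report throughput derived directly from the I/O size configured above (-o 4096): MiB/s = IOPS x 4096 / 2^20. For the first sample, 20928 IOPS x 4096 B = 85,721,088 B/s = 81.75 MiB/s, which is exactly what bdevperf prints.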
00:08:53.284 20928.00 IOPS, 81.75 MiB/s [2024-11-20T15:01:55.056Z] 21696.00 IOPS, 84.75 MiB/s [2024-11-20T15:01:55.992Z] 21546.67 IOPS, 84.17 MiB/s [2024-11-20T15:01:56.929Z] 21328.00 IOPS, 83.31 MiB/s [2024-11-20T15:01:56.929Z] 21184.00 IOPS, 82.75 MiB/s 00:08:56.093 Latency(us) 00:08:56.093 [2024-11-20T15:01:56.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.093 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:56.093 Verification LBA range: start 0x0 length 0xbd0bd 00:08:56.093 Nvme0n1 : 5.08 1523.14 5.95 0.00 0.00 83580.56 12212.33 91803.04 00:08:56.093 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:56.093 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:56.093 Nvme0n1 : 5.10 1455.74 5.69 0.00 0.00 87722.99 16739.32 132230.07 00:08:56.093 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:56.093 Verification LBA range: start 0x0 length 0x4ff80 00:08:56.093 Nvme1n1p1 : 5.09 1522.17 5.95 0.00 0.00 83449.79 13791.51 84222.97 00:08:56.093 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:56.093 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:56.093 Nvme1n1p1 : 5.10 1455.28 5.68 0.00 0.00 87586.86 16739.32 134756.76 00:08:56.093 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:56.093 Verification LBA range: start 0x0 length 0x4ff7f 00:08:56.093 Nvme1n1p2 : 5.10 1530.20 5.98 0.00 0.00 83143.22 11264.82 76221.79 00:08:56.093 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:56.093 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:56.093 Nvme1n1p2 : 5.10 1454.47 5.68 0.00 0.00 87442.69 17897.38 136441.21 00:08:56.093 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:56.093 Verification LBA range: start 0x0 length 0x80000 00:08:56.093 Nvme2n1 : 5.10 1529.79 5.98 0.00 0.00 82993.08 11317.46 72852.87 00:08:56.093 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:56.093 Verification LBA range: start 0x80000 length 0x80000 00:08:56.093 Nvme2n1 : 5.11 1453.87 5.68 0.00 0.00 87287.90 19055.45 131387.84 00:08:56.093 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:56.093 Verification LBA range: start 0x0 length 0x80000 00:08:56.093 Nvme2n2 : 5.11 1529.16 5.97 0.00 0.00 82855.55 12159.69 74958.44 00:08:56.093 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:56.093 Verification LBA range: start 0x80000 length 0x80000 00:08:56.093 Nvme2n2 : 5.11 1453.24 5.68 0.00 0.00 87141.68 20002.96 128018.92 00:08:56.093 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:56.093 Verification LBA range: start 0x0 length 0x80000 00:08:56.093 Nvme2n3 : 5.11 1528.48 5.97 0.00 0.00 82693.10 13107.20 77485.13 00:08:56.093 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:56.093 Verification LBA range: start 0x80000 length 0x80000 00:08:56.093 Nvme2n3 : 5.11 1452.59 5.67 0.00 0.00 87037.93 20634.63 127176.69 00:08:56.093 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:56.093 Verification LBA range: start 0x0 length 0x20000 00:08:56.093 Nvme3n1 : 5.11 1527.78 5.97 0.00 0.00 82566.52 14212.63 78748.48 00:08:56.093 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:56.093 Verification LBA range: start 0x20000 length 0x20000 00:08:56.093 
Nvme3n1 : 5.11 1451.96 5.67 0.00 0.00 86909.70 19266.00 129703.38 00:08:56.093 [2024-11-20T15:01:56.929Z] =================================================================================================================== 00:08:56.093 [2024-11-20T15:01:56.929Z] Total : 20867.88 81.52 0.00 0.00 85120.43 11264.82 136441.21 00:08:58.000 00:08:58.000 real 0m8.036s 00:08:58.000 user 0m14.672s 00:08:58.000 sys 0m0.421s 00:08:58.001 15:01:58 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.001 15:01:58 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:58.001 ************************************ 00:08:58.001 END TEST bdev_verify 00:08:58.001 ************************************ 00:08:58.001 15:01:58 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:58.001 15:01:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:58.001 15:01:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.001 15:01:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:58.001 ************************************ 00:08:58.001 START TEST bdev_verify_big_io 00:08:58.001 ************************************ 00:08:58.001 15:01:58 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:58.001 [2024-11-20 15:01:58.576422] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:08:58.001 [2024-11-20 15:01:58.576565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63329 ] 00:08:58.001 [2024-11-20 15:01:58.765597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:58.260 [2024-11-20 15:01:58.914812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.260 [2024-11-20 15:01:58.914844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.203 Running I/O for 5 seconds... 
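[Editor's note] Two things worth noting in the summary above: the Total row's 20867.88 IOPS is simply the sum of the fourteen per-job IOPS values (seven bdevs, two cores each, per the -m 0x3 core mask), and the big-I/O run starting next swaps -o 4096 for -o 65536, so the same throughput relation now uses 64 KiB per I/O: in the first progress sample below, 1092 IOPS x 64 KiB = 68.25 MiB/s.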
00:09:02.346 1092.00 IOPS, 68.25 MiB/s [2024-11-20T15:02:05.085Z] 1591.50 IOPS, 99.47 MiB/s [2024-11-20T15:02:06.021Z] 2032.67 IOPS, 127.04 MiB/s [2024-11-20T15:02:06.021Z] 2788.50 IOPS, 174.28 MiB/s 00:09:05.185 Latency(us) 00:09:05.186 [2024-11-20T15:02:06.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.186 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.186 Verification LBA range: start 0x0 length 0xbd0b 00:09:05.186 Nvme0n1 : 5.75 134.37 8.40 0.00 0.00 916423.26 35794.76 950035.12 00:09:05.186 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.186 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:05.186 Nvme0n1 : 5.69 118.00 7.38 0.00 0.00 1034998.81 43164.27 1273451.33 00:09:05.186 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.186 Verification LBA range: start 0x0 length 0x4ff8 00:09:05.186 Nvme1n1p1 : 5.75 133.30 8.33 0.00 0.00 897142.05 98961.99 832122.96 00:09:05.186 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.186 Verification LBA range: start 0x4ff8 length 0x4ff8 00:09:05.186 Nvme1n1p1 : 5.75 122.34 7.65 0.00 0.00 987621.12 79169.59 1542964.84 00:09:05.186 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.186 Verification LBA range: start 0x0 length 0x4ff7 00:09:05.186 Nvme1n1p2 : 5.80 136.77 8.55 0.00 0.00 863595.06 55587.16 1051102.69 00:09:05.186 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.186 Verification LBA range: start 0x4ff7 length 0x4ff7 00:09:05.186 Nvme1n1p2 : 5.70 134.75 8.42 0.00 0.00 882935.30 101909.80 798433.77 00:09:05.186 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.186 Verification LBA range: start 0x0 length 0x8000 00:09:05.186 Nvme2n1 : 5.80 130.02 8.13 0.00 0.00 880807.81 55587.16 1509275.66 00:09:05.186 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.186 Verification LBA range: start 0x8000 length 0x8000 00:09:05.186 Nvme2n1 : 5.76 138.27 8.64 0.00 0.00 842315.69 53902.70 811909.45 00:09:05.186 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.186 Verification LBA range: start 0x0 length 0x8000 00:09:05.186 Nvme2n2 : 5.80 135.13 8.45 0.00 0.00 834067.40 48007.09 1549702.68 00:09:05.186 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.186 Verification LBA range: start 0x8000 length 0x8000 00:09:05.186 Nvme2n2 : 5.82 143.12 8.95 0.00 0.00 794521.53 35584.21 842229.72 00:09:05.186 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.186 Verification LBA range: start 0x0 length 0x8000 00:09:05.186 Nvme2n3 : 5.85 148.27 9.27 0.00 0.00 743330.32 20108.23 1381256.74 00:09:05.186 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.186 Verification LBA range: start 0x8000 length 0x8000 00:09:05.186 Nvme2n3 : 5.82 148.21 9.26 0.00 0.00 751989.45 23266.60 859074.31 00:09:05.186 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.186 Verification LBA range: start 0x0 length 0x2000 00:09:05.186 Nvme3n1 : 5.91 164.60 10.29 0.00 0.00 655008.85 1177.81 1596867.55 00:09:05.186 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.186 Verification LBA range: start 0x2000 length 0x2000 00:09:05.186 Nvme3n1 : 5.84 158.83 9.93 0.00 0.00 685684.47 
4421.71 875918.91 00:09:05.186 [2024-11-20T15:02:06.022Z] =================================================================================================================== 00:09:05.186 [2024-11-20T15:02:06.022Z] Total : 1946.00 121.63 0.00 0.00 830834.30 1177.81 1596867.55 00:09:07.721 00:09:07.721 real 0m9.505s 00:09:07.721 user 0m17.593s 00:09:07.721 sys 0m0.459s 00:09:07.721 15:02:07 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.721 15:02:07 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:07.721 ************************************ 00:09:07.721 END TEST bdev_verify_big_io 00:09:07.721 ************************************ 00:09:07.721 15:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:07.721 15:02:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:07.721 15:02:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.721 15:02:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:07.721 ************************************ 00:09:07.721 START TEST bdev_write_zeroes 00:09:07.721 ************************************ 00:09:07.721 15:02:08 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:07.721 [2024-11-20 15:02:08.170316] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:09:07.721 [2024-11-20 15:02:08.170510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63449 ] 00:09:07.721 [2024-11-20 15:02:08.364824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.721 [2024-11-20 15:02:08.519758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.657 Running I/O for 1 seconds... 
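[Editor's note] Every sub-test in this log is driven by the run_test helper from autotest_common.sh, which is what produces the asterisk START TEST / END TEST banners and the real/user/sys timings seen throughout. A simplified sketch of that wrapper, matching only the behavior visible here (the real helper also manages xtrace state, records failing tests, and performs the argument-count guard traced above as '[' 13 -le 1 ']'):

    run_test() {
        local test_name=$1
        shift                          # remaining args are the command to run
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                      # source of the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return "$rc"
    }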
00:09:09.674 49776.00 IOPS, 194.44 MiB/s 00:09:09.674 Latency(us) 00:09:09.674 [2024-11-20T15:02:10.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.674 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:09.674 Nvme0n1 : 1.02 6794.45 26.54 0.00 0.00 18796.67 7053.67 176868.24 00:09:09.674 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:09.674 Nvme1n1p1 : 1.03 7176.71 28.03 0.00 0.00 17767.55 11422.74 81275.17 00:09:09.674 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:09.674 Nvme1n1p2 : 1.03 7168.87 28.00 0.00 0.00 17704.33 11370.10 82959.63 00:09:09.674 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:09.674 Nvme2n1 : 1.03 7161.65 27.98 0.00 0.00 17637.98 11949.13 83801.86 00:09:09.674 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:09.674 Nvme2n2 : 1.03 7154.38 27.95 0.00 0.00 17599.16 11685.94 84222.97 00:09:09.674 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:09.674 Nvme2n3 : 1.03 7147.22 27.92 0.00 0.00 17568.36 11212.18 84644.09 00:09:09.674 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:09.674 Nvme3n1 : 1.03 7140.09 27.89 0.00 0.00 17541.71 9790.92 85065.20 00:09:09.674 [2024-11-20T15:02:10.510Z] =================================================================================================================== 00:09:09.674 [2024-11-20T15:02:10.510Z] Total : 49743.36 194.31 0.00 0.00 17794.47 7053.67 176868.24 00:09:11.051 00:09:11.051 real 0m3.663s 00:09:11.051 user 0m3.167s 00:09:11.052 sys 0m0.373s 00:09:11.052 15:02:11 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.052 15:02:11 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:11.052 ************************************ 00:09:11.052 END TEST bdev_write_zeroes 00:09:11.052 ************************************ 00:09:11.052 15:02:11 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:11.052 15:02:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:11.052 15:02:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.052 15:02:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:11.052 ************************************ 00:09:11.052 START TEST bdev_json_nonenclosed 00:09:11.052 ************************************ 00:09:11.052 15:02:11 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:11.310 [2024-11-20 15:02:11.910270] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
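[Editor's note] The latency columns in the write_zeroes table above are in microseconds, and the averages line up with Little's law for the configured queue depth (avg latency = qd / per-job IOPS): Nvme0n1's 6794.45 IOPS at -q 128 implies 128 / 6794.45 = 18.84 ms, in line with the reported 18796.67 us average; the small gap is expected from ramp-up and reporting granularity.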
00:09:11.310 [2024-11-20 15:02:11.910428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63509 ] 00:09:11.310 [2024-11-20 15:02:12.100220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.569 [2024-11-20 15:02:12.249508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.569 [2024-11-20 15:02:12.249652] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:11.569 [2024-11-20 15:02:12.249679] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:11.569 [2024-11-20 15:02:12.249693] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:11.828 00:09:11.828 real 0m0.748s 00:09:11.828 user 0m0.470s 00:09:11.828 sys 0m0.172s 00:09:11.828 15:02:12 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.828 15:02:12 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:11.828 ************************************ 00:09:11.828 END TEST bdev_json_nonenclosed 00:09:11.828 ************************************ 00:09:11.829 15:02:12 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:11.829 15:02:12 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:11.829 15:02:12 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.829 15:02:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:11.829 ************************************ 00:09:11.829 START TEST bdev_json_nonarray 00:09:11.829 ************************************ 00:09:11.829 15:02:12 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:12.087 [2024-11-20 15:02:12.735285] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:09:12.087 [2024-11-20 15:02:12.735428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63540 ] 00:09:12.346 [2024-11-20 15:02:12.923071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.346 [2024-11-20 15:02:13.072184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.346 [2024-11-20 15:02:13.072332] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
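[Editor's note] The two negative tests above exercise json_config_prepare_ctx's validation of the --json config: a valid bdevperf config is a single JSON object whose 'subsystems' key holds an array, and each fixture breaks that in one of the two ways the errors report. Illustrative shapes only; the actual nonenclosed.json and nonarray.json fixtures are not shown in this log:

    # valid: one enclosing object, "subsystems" is an array
    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }

    # nonenclosed.json-style failure: top level not enclosed in {}
    "subsystems": [ { "subsystem": "bdev", "config": [] } ]

    # nonarray.json-style failure: "subsystems" is an object, not an array
    { "subsystems": { "subsystem": "bdev", "config": [] } }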
00:09:12.346 [2024-11-20 15:02:13.072361] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:12.346 [2024-11-20 15:02:13.072378] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:12.605 00:09:12.605 real 0m0.737s 00:09:12.605 user 0m0.436s 00:09:12.605 sys 0m0.195s 00:09:12.605 15:02:13 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.605 15:02:13 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:12.605 ************************************ 00:09:12.605 END TEST bdev_json_nonarray 00:09:12.605 ************************************ 00:09:12.605 15:02:13 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:09:12.605 15:02:13 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:09:12.605 15:02:13 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:12.605 15:02:13 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:12.605 15:02:13 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.605 15:02:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:12.864 ************************************ 00:09:12.864 START TEST bdev_gpt_uuid 00:09:12.864 ************************************ 00:09:12.864 15:02:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:09:12.864 15:02:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:09:12.864 15:02:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:09:12.864 15:02:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63567 00:09:12.864 15:02:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:12.864 15:02:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63567 00:09:12.864 15:02:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:12.864 15:02:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63567 ']' 00:09:12.864 15:02:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.864 15:02:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.864 15:02:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.864 15:02:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.864 15:02:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:12.864 [2024-11-20 15:02:13.571479] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
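[Editor's note] The gpt_uuid test that follows asks the target for each GPT partition bdev by its unique partition GUID and checks the alias and GUID fields with jq. The RPC/jq combinations it traces can be replayed by hand against a running spdk_tgt (paths and UUID as in this log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    UUID=6f89f330-603b-4116-ac73-2ca8eae53030    # the SPDK_TEST_first partition

    # fetch the bdev record for one partition by GUID
    $RPC -s /var/tmp/spdk.sock bdev_get_bdevs -b "$UUID" > bdev.json

    jq -r '.[0].aliases[0]' bdev.json                                  # expect $UUID
    jq -r '.[0].driver_specific.gpt.unique_partition_guid' bdev.json   # expect $UUID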
00:09:12.864 [2024-11-20 15:02:13.571646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63567 ] 00:09:13.123 [2024-11-20 15:02:13.757943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.123 [2024-11-20 15:02:13.905769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.157 15:02:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.157 15:02:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:09:14.158 15:02:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:14.158 15:02:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.158 15:02:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:14.724 Some configs were skipped because the RPC state that can call them passed over. 00:09:14.724 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.724 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:09:14.724 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.724 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:14.724 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.724 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:14.724 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.724 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:14.724 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.724 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:09:14.725 { 00:09:14.725 "name": "Nvme1n1p1", 00:09:14.725 "aliases": [ 00:09:14.725 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:14.725 ], 00:09:14.725 "product_name": "GPT Disk", 00:09:14.725 "block_size": 4096, 00:09:14.725 "num_blocks": 655104, 00:09:14.725 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:14.725 "assigned_rate_limits": { 00:09:14.725 "rw_ios_per_sec": 0, 00:09:14.725 "rw_mbytes_per_sec": 0, 00:09:14.725 "r_mbytes_per_sec": 0, 00:09:14.725 "w_mbytes_per_sec": 0 00:09:14.725 }, 00:09:14.725 "claimed": false, 00:09:14.725 "zoned": false, 00:09:14.725 "supported_io_types": { 00:09:14.725 "read": true, 00:09:14.725 "write": true, 00:09:14.725 "unmap": true, 00:09:14.725 "flush": true, 00:09:14.725 "reset": true, 00:09:14.725 "nvme_admin": false, 00:09:14.725 "nvme_io": false, 00:09:14.725 "nvme_io_md": false, 00:09:14.725 "write_zeroes": true, 00:09:14.725 "zcopy": false, 00:09:14.725 "get_zone_info": false, 00:09:14.725 "zone_management": false, 00:09:14.725 "zone_append": false, 00:09:14.725 "compare": true, 00:09:14.725 "compare_and_write": false, 00:09:14.725 "abort": true, 00:09:14.725 "seek_hole": false, 00:09:14.725 "seek_data": false, 00:09:14.725 "copy": true, 00:09:14.725 "nvme_iov_md": false 00:09:14.725 }, 00:09:14.725 "driver_specific": { 
00:09:14.725 "gpt": { 00:09:14.725 "base_bdev": "Nvme1n1", 00:09:14.725 "offset_blocks": 256, 00:09:14.725 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:14.725 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:14.725 "partition_name": "SPDK_TEST_first" 00:09:14.725 } 00:09:14.725 } 00:09:14.725 } 00:09:14.725 ]' 00:09:14.725 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:09:14.725 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:09:14.725 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:09:14.725 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:14.725 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:14.725 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:14.725 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:14.725 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.725 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:14.725 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.725 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:09:14.725 { 00:09:14.725 "name": "Nvme1n1p2", 00:09:14.725 "aliases": [ 00:09:14.725 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:14.725 ], 00:09:14.725 "product_name": "GPT Disk", 00:09:14.725 "block_size": 4096, 00:09:14.725 "num_blocks": 655103, 00:09:14.725 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:14.725 "assigned_rate_limits": { 00:09:14.725 "rw_ios_per_sec": 0, 00:09:14.725 "rw_mbytes_per_sec": 0, 00:09:14.725 "r_mbytes_per_sec": 0, 00:09:14.725 "w_mbytes_per_sec": 0 00:09:14.725 }, 00:09:14.725 "claimed": false, 00:09:14.725 "zoned": false, 00:09:14.725 "supported_io_types": { 00:09:14.725 "read": true, 00:09:14.725 "write": true, 00:09:14.725 "unmap": true, 00:09:14.725 "flush": true, 00:09:14.725 "reset": true, 00:09:14.725 "nvme_admin": false, 00:09:14.725 "nvme_io": false, 00:09:14.725 "nvme_io_md": false, 00:09:14.725 "write_zeroes": true, 00:09:14.725 "zcopy": false, 00:09:14.725 "get_zone_info": false, 00:09:14.725 "zone_management": false, 00:09:14.725 "zone_append": false, 00:09:14.725 "compare": true, 00:09:14.725 "compare_and_write": false, 00:09:14.725 "abort": true, 00:09:14.725 "seek_hole": false, 00:09:14.725 "seek_data": false, 00:09:14.725 "copy": true, 00:09:14.725 "nvme_iov_md": false 00:09:14.725 }, 00:09:14.725 "driver_specific": { 00:09:14.725 "gpt": { 00:09:14.725 "base_bdev": "Nvme1n1", 00:09:14.725 "offset_blocks": 655360, 00:09:14.725 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:14.725 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:14.725 "partition_name": "SPDK_TEST_second" 00:09:14.725 } 00:09:14.725 } 00:09:14.725 } 00:09:14.725 ]' 00:09:14.725 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:09:14.725 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:09:14.725 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:09:14.983 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:14.983 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:14.983 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:14.983 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63567 00:09:14.983 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63567 ']' 00:09:14.983 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63567 00:09:14.983 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:09:14.983 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.983 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63567 00:09:14.983 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.983 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.983 killing process with pid 63567 00:09:14.983 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63567' 00:09:14.983 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63567 00:09:14.983 15:02:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63567 00:09:18.271 00:09:18.271 real 0m4.952s 00:09:18.271 user 0m4.873s 00:09:18.271 sys 0m0.767s 00:09:18.271 15:02:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.271 15:02:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:18.271 ************************************ 00:09:18.271 END TEST bdev_gpt_uuid 00:09:18.271 ************************************ 00:09:18.271 15:02:18 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:09:18.271 15:02:18 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:18.271 15:02:18 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:09:18.271 15:02:18 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:18.271 15:02:18 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:18.271 15:02:18 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:18.271 15:02:18 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:18.271 15:02:18 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:18.271 15:02:18 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:18.271 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:18.528 Waiting for block devices as requested 00:09:18.787 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.787 0000:00:10.0 (1b36 0010): 
00:09:24.572 15:02:25 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:24.572 15:02:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.572 15:02:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.572 15:02:25 -- common/autotest_common.sh@10 -- # set +x 00:09:24.572 ************************************ 00:09:24.572 START TEST nvme 00:09:24.572 ************************************ 00:09:24.572 15:02:25 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:24.572 * Looking for test storage... 00:09:24.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:24.572 15:02:25 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:24.572 15:02:25 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:09:24.572 15:02:25 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:24.832 15:02:25 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:24.832 15:02:25 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.832 15:02:25 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.832 15:02:25 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.832 15:02:25 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.832 15:02:25 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.832 15:02:25 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.832 15:02:25 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.832 15:02:25 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.832 15:02:25 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.832 15:02:25 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.832 15:02:25 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.832 15:02:25 nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:24.832 15:02:25 nvme -- scripts/common.sh@345 -- # : 1 00:09:24.832 15:02:25 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.832 15:02:25 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.832 15:02:25 nvme -- scripts/common.sh@365 -- # decimal 1 00:09:24.832 15:02:25 nvme -- scripts/common.sh@353 -- # local d=1 00:09:24.832 15:02:25 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.832 15:02:25 nvme -- scripts/common.sh@355 -- # echo 1 00:09:24.832 15:02:25 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.832 15:02:25 nvme -- scripts/common.sh@366 -- # decimal 2 00:09:24.832 15:02:25 nvme -- scripts/common.sh@353 -- # local d=2 00:09:24.832 15:02:25 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.832 15:02:25 nvme -- scripts/common.sh@355 -- # echo 2 00:09:24.832 15:02:25 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.832 15:02:25 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.832 15:02:25 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.832 15:02:25 nvme -- scripts/common.sh@368 -- # return 0 00:09:24.832 15:02:25 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.832 15:02:25 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:24.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.832 --rc genhtml_branch_coverage=1 00:09:24.832 --rc genhtml_function_coverage=1 00:09:24.832 --rc genhtml_legend=1 00:09:24.832 --rc geninfo_all_blocks=1 00:09:24.832 --rc geninfo_unexecuted_blocks=1 00:09:24.832 00:09:24.832 ' 00:09:24.832 15:02:25 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:24.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.832 --rc genhtml_branch_coverage=1 00:09:24.832 --rc genhtml_function_coverage=1 00:09:24.832 --rc genhtml_legend=1 00:09:24.832 --rc geninfo_all_blocks=1 00:09:24.832 --rc geninfo_unexecuted_blocks=1 00:09:24.832 00:09:24.832 ' 00:09:24.832 15:02:25 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:24.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.832 --rc genhtml_branch_coverage=1 00:09:24.832 --rc genhtml_function_coverage=1 00:09:24.832 --rc genhtml_legend=1 00:09:24.832 --rc geninfo_all_blocks=1 00:09:24.832 --rc geninfo_unexecuted_blocks=1 00:09:24.832 00:09:24.832 ' 00:09:24.832 15:02:25 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:24.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.832 --rc genhtml_branch_coverage=1 00:09:24.832 --rc genhtml_function_coverage=1 00:09:24.832 --rc genhtml_legend=1 00:09:24.832 --rc geninfo_all_blocks=1 00:09:24.832 --rc geninfo_unexecuted_blocks=1 00:09:24.832 00:09:24.832 ' 00:09:24.832 15:02:25 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:25.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:26.339 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.339 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.339 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.339 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.339 15:02:27 nvme -- nvme/nvme.sh@79 -- # uname 00:09:26.339 15:02:27 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:26.339 15:02:27 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:26.339 15:02:27 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:26.339 15:02:27 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:26.339
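
Aside: stripped of the autotest_common.sh plumbing, the handshake traced around this point is: save and zero /proc/sys/kernel/randomize_va_space (the @1072/@1073 lines), launch the stub as the DPDK primary process, then poll once a second until it has created /var/run/spdk_stub0, bailing out if the stub dies first. A minimal standalone rendering under those assumptions (binary path and flags taken from this run; the real helper also arms the kill_stub trap shown above):

    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to be ready for secondary processes..."
    while [ ! -e /var/run/spdk_stub0 ]; do
        # give up if the stub exited before publishing its handle
        [ -e "/proc/$stubpid" ] || { echo "stub died" >&2; exit 1; }
        sleep 1
    done
    echo done.
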
15:02:27 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:09:26.339 15:02:27 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:09:26.339 15:02:27 nvme -- common/autotest_common.sh@1075 -- # stubpid=64235 00:09:26.339 Waiting for stub to be ready for secondary processes... 00:09:26.339 15:02:27 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to be ready for secondary processes... 00:09:26.339 15:02:27 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:26.339 15:02:27 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64235 ]] 00:09:26.339 15:02:27 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:09:26.339 15:02:27 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:26.598 [2024-11-20 15:02:27.202308] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:09:26.598 [2024-11-20 15:02:27.202489] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:27.596 15:02:28 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:27.596 15:02:28 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64235 ]] 00:09:27.596 15:02:28 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:09:28.166 [2024-11-20 15:02:28.918241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:28.425 [2024-11-20 15:02:29.055135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.425 [2024-11-20 15:02:29.055277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.425 [2024-11-20 15:02:29.055313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.425 [2024-11-20 15:02:29.077793] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:28.425 [2024-11-20 15:02:29.077860] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:28.425 [2024-11-20 15:02:29.095271] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:28.425 [2024-11-20 15:02:29.095547] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:28.425 [2024-11-20 15:02:29.100742] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:28.425 [2024-11-20 15:02:29.101120] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:28.425 [2024-11-20 15:02:29.101274] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:28.425 [2024-11-20 15:02:29.105459] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:28.425 [2024-11-20 15:02:29.105777] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:28.425 [2024-11-20 15:02:29.105906] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:28.425 [2024-11-20 15:02:29.109872] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:28.425 [2024-11-20 15:02:29.110080] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:28.425 [2024-11-20 15:02:29.110169] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:28.425 [2024-11-20 15:02:29.110229] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:28.425 [2024-11-20 15:02:29.110310] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:28.425 15:02:29 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:28.425 15:02:29 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:09:28.425 done. 00:09:28.425 15:02:29 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:28.425 15:02:29 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:09:28.425 15:02:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.425 15:02:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:28.425 ************************************ 00:09:28.425 START TEST nvme_reset 00:09:28.425 ************************************ 00:09:28.425 15:02:29 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:28.684 Initializing NVMe Controllers 00:09:28.684 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:28.684 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:28.684 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:28.684 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:28.684 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:28.684 00:09:28.684 real 0m0.311s 00:09:28.684 user 0m0.106s 00:09:28.684 sys 0m0.162s 00:09:28.684 15:02:29 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.684 15:02:29 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:28.684 ************************************ 00:09:28.684 END TEST nvme_reset 00:09:28.684 ************************************ 00:09:28.943 15:02:29 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:28.943 15:02:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.943 15:02:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.943 15:02:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:28.943 ************************************ 00:09:28.943 START TEST nvme_identify 00:09:28.943 ************************************ 00:09:28.943 15:02:29 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:09:28.943 15:02:29 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:28.943 15:02:29 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:28.943 15:02:29 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:28.943 15:02:29 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:28.943 15:02:29 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:28.943 15:02:29 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:09:28.943 15:02:29 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:28.943 15:02:29 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:28.943 15:02:29 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:28.943 15:02:29 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:28.943 15:02:29 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:28.943 15:02:29 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:29.205 [2024-11-20 15:02:29.997125] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64268 terminated unexpected 00:09:29.205 ===================================================== 00:09:29.205 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:29.205 ===================================================== 00:09:29.205 Controller Capabilities/Features 00:09:29.205 ================================ 00:09:29.205 Vendor ID: 1b36 00:09:29.205 Subsystem Vendor ID: 1af4 00:09:29.205 Serial Number: 12340 00:09:29.205 Model Number: QEMU NVMe Ctrl 00:09:29.205 Firmware Version: 8.0.0 00:09:29.205 Recommended Arb Burst: 6 00:09:29.205 IEEE OUI Identifier: 00 54 52 00:09:29.205 Multi-path I/O 00:09:29.205 May have multiple subsystem ports: No 00:09:29.205 May have multiple controllers: No 00:09:29.205 Associated with SR-IOV VF: No 00:09:29.205 Max Data Transfer Size: 524288 00:09:29.205 Max Number of Namespaces: 256 00:09:29.205 Max Number of I/O Queues: 64 00:09:29.205 NVMe Specification Version (VS): 1.4 00:09:29.205 NVMe Specification Version (Identify): 1.4 00:09:29.205 Maximum Queue Entries: 2048 00:09:29.205 Contiguous Queues Required: Yes 00:09:29.205 Arbitration Mechanisms Supported 00:09:29.205 Weighted Round Robin: Not Supported 00:09:29.205 Vendor Specific: Not Supported 00:09:29.205 Reset Timeout: 7500 ms 00:09:29.205 Doorbell Stride: 4 bytes 00:09:29.205 NVM Subsystem Reset: Not Supported 00:09:29.205 Command Sets Supported 00:09:29.206 NVM Command Set: Supported 00:09:29.206 Boot Partition: Not Supported 00:09:29.206 Memory Page Size Minimum: 4096 bytes 00:09:29.206 Memory Page Size Maximum: 65536 bytes 00:09:29.206 Persistent Memory Region: Not Supported 00:09:29.206 Optional Asynchronous Events Supported 00:09:29.206 Namespace Attribute Notices: Supported 00:09:29.206 Firmware Activation Notices: Not Supported 00:09:29.206 ANA Change Notices: Not Supported 00:09:29.206 PLE Aggregate Log Change Notices: Not Supported 00:09:29.206 LBA Status Info Alert Notices: Not Supported 00:09:29.206 EGE Aggregate Log Change Notices: Not Supported 00:09:29.206 Normal NVM Subsystem Shutdown event: Not Supported 00:09:29.206 Zone Descriptor Change Notices: Not Supported 00:09:29.206 Discovery Log Change Notices: Not Supported 00:09:29.206 Controller Attributes 00:09:29.206 128-bit Host Identifier: Not Supported 00:09:29.206 Non-Operational Permissive Mode: Not Supported 00:09:29.206 NVM Sets: Not Supported 00:09:29.206 Read Recovery Levels: Not Supported 00:09:29.206 Endurance Groups: Not Supported 00:09:29.206 Predictable Latency Mode: Not Supported 00:09:29.206 Traffic Based Keep ALive: Not Supported 00:09:29.206 Namespace Granularity: Not Supported 00:09:29.206 SQ Associations: Not Supported 00:09:29.206 UUID List: Not Supported 00:09:29.206 Multi-Domain Subsystem: Not Supported 00:09:29.206 Fixed Capacity Management: Not Supported 00:09:29.206 Variable Capacity Management: Not Supported 00:09:29.206 Delete Endurance Group: Not Supported 00:09:29.206 Delete NVM Set: Not Supported 00:09:29.206 Extended LBA Formats Supported: Supported 00:09:29.206 Flexible Data Placement Supported: Not Supported 00:09:29.206 00:09:29.206 Controller Memory Buffer Support 00:09:29.206 ================================ 00:09:29.206 Supported: No 
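
Aside (controller 12340's dump resumes below): the four PCI addresses printed at the top of this identify step came from get_nvme_bdfs, which is nothing more than gen_nvme.sh piped through jq. gen_nvme.sh emits an SPDK bdev JSON config covering every controller setup.sh bound, and '.config[].params.traddr' plucks out the transport addresses, exactly as the @1499/@1504 trace lines show:

    # same calls as the trace above; empty output means setup.sh bound nothing
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || echo "no NVMe devices found" >&2
    printf '%s\n' "${bdfs[@]}"
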
00:09:29.206 00:09:29.206 Persistent Memory Region Support 00:09:29.206 ================================ 00:09:29.206 Supported: No 00:09:29.206 00:09:29.206 Admin Command Set Attributes 00:09:29.206 ============================ 00:09:29.206 Security Send/Receive: Not Supported 00:09:29.206 Format NVM: Supported 00:09:29.206 Firmware Activate/Download: Not Supported 00:09:29.206 Namespace Management: Supported 00:09:29.206 Device Self-Test: Not Supported 00:09:29.206 Directives: Supported 00:09:29.206 NVMe-MI: Not Supported 00:09:29.206 Virtualization Management: Not Supported 00:09:29.206 Doorbell Buffer Config: Supported 00:09:29.206 Get LBA Status Capability: Not Supported 00:09:29.206 Command & Feature Lockdown Capability: Not Supported 00:09:29.206 Abort Command Limit: 4 00:09:29.206 Async Event Request Limit: 4 00:09:29.206 Number of Firmware Slots: N/A 00:09:29.206 Firmware Slot 1 Read-Only: N/A 00:09:29.206 Firmware Activation Without Reset: N/A 00:09:29.206 Multiple Update Detection Support: N/A 00:09:29.206 Firmware Update Granularity: No Information Provided 00:09:29.206 Per-Namespace SMART Log: Yes 00:09:29.206 Asymmetric Namespace Access Log Page: Not Supported 00:09:29.206 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:29.206 Command Effects Log Page: Supported 00:09:29.206 Get Log Page Extended Data: Supported 00:09:29.206 Telemetry Log Pages: Not Supported 00:09:29.206 Persistent Event Log Pages: Not Supported 00:09:29.206 Supported Log Pages Log Page: May Support 00:09:29.206 Commands Supported & Effects Log Page: Not Supported 00:09:29.206 Feature Identifiers & Effects Log Page:May Support 00:09:29.206 NVMe-MI Commands & Effects Log Page: May Support 00:09:29.206 Data Area 4 for Telemetry Log: Not Supported 00:09:29.206 Error Log Page Entries Supported: 1 00:09:29.206 Keep Alive: Not Supported 00:09:29.206 00:09:29.206 NVM Command Set Attributes 00:09:29.206 ========================== 00:09:29.206 Submission Queue Entry Size 00:09:29.206 Max: 64 00:09:29.206 Min: 64 00:09:29.206 Completion Queue Entry Size 00:09:29.206 Max: 16 00:09:29.206 Min: 16 00:09:29.206 Number of Namespaces: 256 00:09:29.206 Compare Command: Supported 00:09:29.206 Write Uncorrectable Command: Not Supported 00:09:29.206 Dataset Management Command: Supported 00:09:29.206 Write Zeroes Command: Supported 00:09:29.206 Set Features Save Field: Supported 00:09:29.206 Reservations: Not Supported 00:09:29.206 Timestamp: Supported 00:09:29.206 Copy: Supported 00:09:29.206 Volatile Write Cache: Present 00:09:29.206 Atomic Write Unit (Normal): 1 00:09:29.206 Atomic Write Unit (PFail): 1 00:09:29.206 Atomic Compare & Write Unit: 1 00:09:29.206 Fused Compare & Write: Not Supported 00:09:29.206 Scatter-Gather List 00:09:29.206 SGL Command Set: Supported 00:09:29.206 SGL Keyed: Not Supported 00:09:29.206 SGL Bit Bucket Descriptor: Not Supported 00:09:29.206 SGL Metadata Pointer: Not Supported 00:09:29.206 Oversized SGL: Not Supported 00:09:29.206 SGL Metadata Address: Not Supported 00:09:29.206 SGL Offset: Not Supported 00:09:29.206 Transport SGL Data Block: Not Supported 00:09:29.206 Replay Protected Memory Block: Not Supported 00:09:29.206 00:09:29.206 Firmware Slot Information 00:09:29.206 ========================= 00:09:29.206 Active slot: 1 00:09:29.206 Slot 1 Firmware Revision: 1.0 00:09:29.206 00:09:29.206 00:09:29.206 Commands Supported and Effects 00:09:29.206 ============================== 00:09:29.206 Admin Commands 00:09:29.206 -------------- 00:09:29.206 Delete I/O Submission Queue (00h): Supported 
00:09:29.206 Create I/O Submission Queue (01h): Supported 00:09:29.206 Get Log Page (02h): Supported 00:09:29.206 Delete I/O Completion Queue (04h): Supported 00:09:29.206 Create I/O Completion Queue (05h): Supported 00:09:29.206 Identify (06h): Supported 00:09:29.206 Abort (08h): Supported 00:09:29.206 Set Features (09h): Supported 00:09:29.206 Get Features (0Ah): Supported 00:09:29.206 Asynchronous Event Request (0Ch): Supported 00:09:29.206 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:29.206 Directive Send (19h): Supported 00:09:29.206 Directive Receive (1Ah): Supported 00:09:29.206 Virtualization Management (1Ch): Supported 00:09:29.206 Doorbell Buffer Config (7Ch): Supported 00:09:29.206 Format NVM (80h): Supported LBA-Change 00:09:29.206 I/O Commands 00:09:29.206 ------------ 00:09:29.206 Flush (00h): Supported LBA-Change 00:09:29.206 Write (01h): Supported LBA-Change 00:09:29.206 Read (02h): Supported 00:09:29.206 Compare (05h): Supported 00:09:29.206 Write Zeroes (08h): Supported LBA-Change 00:09:29.206 Dataset Management (09h): Supported LBA-Change 00:09:29.206 Unknown (0Ch): Supported 00:09:29.206 Unknown (12h): Supported 00:09:29.206 Copy (19h): Supported LBA-Change 00:09:29.206 Unknown (1Dh): Supported LBA-Change 00:09:29.206 00:09:29.206 Error Log 00:09:29.206 ========= 00:09:29.206 00:09:29.206 Arbitration 00:09:29.206 =========== 00:09:29.206 Arbitration Burst: no limit 00:09:29.206 00:09:29.206 Power Management 00:09:29.206 ================ 00:09:29.206 Number of Power States: 1 00:09:29.206 Current Power State: Power State #0 00:09:29.206 Power State #0: 00:09:29.206 Max Power: 25.00 W 00:09:29.206 Non-Operational State: Operational 00:09:29.206 Entry Latency: 16 microseconds 00:09:29.206 Exit Latency: 4 microseconds 00:09:29.206 Relative Read Throughput: 0 00:09:29.206 Relative Read Latency: 0 00:09:29.206 Relative Write Throughput: 0 00:09:29.206 Relative Write Latency: 0 00:09:29.206 [2024-11-20 15:02:30.000085] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64268 terminated unexpected 00:09:29.206 Idle Power: Not Reported 00:09:29.206 Active Power: Not Reported 00:09:29.206 Non-Operational Permissive Mode: Not Supported 00:09:29.206 00:09:29.206 Health Information 00:09:29.206 ================== 00:09:29.206 Critical Warnings: 00:09:29.206 Available Spare Space: OK 00:09:29.206 Temperature: OK 00:09:29.206 Device Reliability: OK 00:09:29.206 Read Only: No 00:09:29.206 Volatile Memory Backup: OK 00:09:29.206 Current Temperature: 323 Kelvin (50 Celsius) 00:09:29.206 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:29.206 Available Spare: 0% 00:09:29.206 Available Spare Threshold: 0% 00:09:29.206 Life Percentage Used: 0% 00:09:29.206 Data Units Read: 739 00:09:29.206 Data Units Written: 667 00:09:29.206 Host Read Commands: 35372 00:09:29.206 Host Write Commands: 35158 00:09:29.206 Controller Busy Time: 0 minutes 00:09:29.206 Power Cycles: 0 00:09:29.206 Power On Hours: 0 hours 00:09:29.206 Unsafe Shutdowns: 0 00:09:29.206 Unrecoverable Media Errors: 0 00:09:29.207 Lifetime Error Log Entries: 0 00:09:29.207 Warning Temperature Time: 0 minutes 00:09:29.207 Critical Temperature Time: 0 minutes 00:09:29.207 00:09:29.207 Number of Queues 00:09:29.207 ================ 00:09:29.207 Number of I/O Submission Queues: 64 00:09:29.207 Number of I/O Completion Queues: 64 00:09:29.207 00:09:29.207 ZNS Specific Controller Data 00:09:29.207 ============================ 00:09:29.207 Zone Append Size Limit: 0 00:09:29.207 
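
Aside (the namespace data follows below): the Health Information block above is a rendering of the controller's SMART / Health log page (Log Identifier 02h) — spare capacity, temperature in Kelvin (323 K = 50 °C), and the data-unit and host-command counters. These devices are bound to uio_pci_generic, so SPDK's spdk_nvme_identify does the reads; for a controller left on the kernel nvme driver, nvme-cli would read the same structures (device name assumed):

    # decoded SMART/health page (Log Identifier 02h)
    nvme smart-log /dev/nvme0
    # identify-controller data, roughly what this dump renders
    nvme id-ctrl /dev/nvme0 --human-readable
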
00:09:29.207 00:09:29.207 Active Namespaces 00:09:29.207 ================= 00:09:29.207 Namespace ID:1 00:09:29.207 Error Recovery Timeout: Unlimited 00:09:29.207 Command Set Identifier: NVM (00h) 00:09:29.207 Deallocate: Supported 00:09:29.207 Deallocated/Unwritten Error: Supported 00:09:29.207 Deallocated Read Value: All 0x00 00:09:29.207 Deallocate in Write Zeroes: Not Supported 00:09:29.207 Deallocated Guard Field: 0xFFFF 00:09:29.207 Flush: Supported 00:09:29.207 Reservation: Not Supported 00:09:29.207 Metadata Transferred as: Separate Metadata Buffer 00:09:29.207 Namespace Sharing Capabilities: Private 00:09:29.207 Size (in LBAs): 1548666 (5GiB) 00:09:29.207 Capacity (in LBAs): 1548666 (5GiB) 00:09:29.207 Utilization (in LBAs): 1548666 (5GiB) 00:09:29.207 Thin Provisioning: Not Supported 00:09:29.207 Per-NS Atomic Units: No 00:09:29.207 Maximum Single Source Range Length: 128 00:09:29.207 Maximum Copy Length: 128 00:09:29.207 Maximum Source Range Count: 128 00:09:29.207 NGUID/EUI64 Never Reused: No 00:09:29.207 Namespace Write Protected: No 00:09:29.207 Number of LBA Formats: 8 00:09:29.207 Current LBA Format: LBA Format #07 00:09:29.207 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.207 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.207 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.207 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.207 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.207 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.207 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.207 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.207 00:09:29.207 NVM Specific Namespace Data 00:09:29.207 =========================== 00:09:29.207 Logical Block Storage Tag Mask: 0 00:09:29.207 Protection Information Capabilities: 00:09:29.207 16b Guard Protection Information Storage Tag Support: No 00:09:29.207 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:29.207 Storage Tag Check Read Support: No 00:09:29.207 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.207 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.207 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.207 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.207 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.207 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.207 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.207 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.207 ===================================================== 00:09:29.207 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:29.207 ===================================================== 00:09:29.207 Controller Capabilities/Features 00:09:29.207 ================================ 00:09:29.207 Vendor ID: 1b36 00:09:29.207 Subsystem Vendor ID: 1af4 00:09:29.207 Serial Number: 12341 00:09:29.207 Model Number: QEMU NVMe Ctrl 00:09:29.207 Firmware Version: 8.0.0 00:09:29.207 Recommended Arb Burst: 6 00:09:29.207 IEEE OUI Identifier: 00 54 52 00:09:29.207 Multi-path I/O 00:09:29.207 May have multiple subsystem ports: No 00:09:29.207 May have multiple controllers: No 
00:09:29.207 Associated with SR-IOV VF: No 00:09:29.207 Max Data Transfer Size: 524288 00:09:29.207 Max Number of Namespaces: 256 00:09:29.207 Max Number of I/O Queues: 64 00:09:29.207 NVMe Specification Version (VS): 1.4 00:09:29.207 NVMe Specification Version (Identify): 1.4 00:09:29.207 Maximum Queue Entries: 2048 00:09:29.207 Contiguous Queues Required: Yes 00:09:29.207 Arbitration Mechanisms Supported 00:09:29.207 Weighted Round Robin: Not Supported 00:09:29.207 Vendor Specific: Not Supported 00:09:29.207 Reset Timeout: 7500 ms 00:09:29.207 Doorbell Stride: 4 bytes 00:09:29.207 NVM Subsystem Reset: Not Supported 00:09:29.207 Command Sets Supported 00:09:29.207 NVM Command Set: Supported 00:09:29.207 Boot Partition: Not Supported 00:09:29.207 Memory Page Size Minimum: 4096 bytes 00:09:29.207 Memory Page Size Maximum: 65536 bytes 00:09:29.207 Persistent Memory Region: Not Supported 00:09:29.207 Optional Asynchronous Events Supported 00:09:29.207 Namespace Attribute Notices: Supported 00:09:29.207 Firmware Activation Notices: Not Supported 00:09:29.207 ANA Change Notices: Not Supported 00:09:29.207 PLE Aggregate Log Change Notices: Not Supported 00:09:29.207 LBA Status Info Alert Notices: Not Supported 00:09:29.207 EGE Aggregate Log Change Notices: Not Supported 00:09:29.207 Normal NVM Subsystem Shutdown event: Not Supported 00:09:29.207 Zone Descriptor Change Notices: Not Supported 00:09:29.207 Discovery Log Change Notices: Not Supported 00:09:29.207 Controller Attributes 00:09:29.207 128-bit Host Identifier: Not Supported 00:09:29.207 Non-Operational Permissive Mode: Not Supported 00:09:29.207 NVM Sets: Not Supported 00:09:29.207 Read Recovery Levels: Not Supported 00:09:29.207 Endurance Groups: Not Supported 00:09:29.207 Predictable Latency Mode: Not Supported 00:09:29.207 Traffic Based Keep ALive: Not Supported 00:09:29.207 Namespace Granularity: Not Supported 00:09:29.207 SQ Associations: Not Supported 00:09:29.207 UUID List: Not Supported 00:09:29.207 Multi-Domain Subsystem: Not Supported 00:09:29.207 Fixed Capacity Management: Not Supported 00:09:29.207 Variable Capacity Management: Not Supported 00:09:29.207 Delete Endurance Group: Not Supported 00:09:29.207 Delete NVM Set: Not Supported 00:09:29.207 Extended LBA Formats Supported: Supported 00:09:29.207 Flexible Data Placement Supported: Not Supported 00:09:29.207 00:09:29.207 Controller Memory Buffer Support 00:09:29.207 ================================ 00:09:29.207 Supported: No 00:09:29.207 00:09:29.207 Persistent Memory Region Support 00:09:29.207 ================================ 00:09:29.207 Supported: No 00:09:29.207 00:09:29.207 Admin Command Set Attributes 00:09:29.207 ============================ 00:09:29.207 Security Send/Receive: Not Supported 00:09:29.207 Format NVM: Supported 00:09:29.207 Firmware Activate/Download: Not Supported 00:09:29.207 Namespace Management: Supported 00:09:29.207 Device Self-Test: Not Supported 00:09:29.207 Directives: Supported 00:09:29.207 NVMe-MI: Not Supported 00:09:29.207 Virtualization Management: Not Supported 00:09:29.207 Doorbell Buffer Config: Supported 00:09:29.207 Get LBA Status Capability: Not Supported 00:09:29.207 Command & Feature Lockdown Capability: Not Supported 00:09:29.207 Abort Command Limit: 4 00:09:29.207 Async Event Request Limit: 4 00:09:29.207 Number of Firmware Slots: N/A 00:09:29.207 Firmware Slot 1 Read-Only: N/A 00:09:29.207 Firmware Activation Without Reset: N/A 00:09:29.207 Multiple Update Detection Support: N/A 00:09:29.207 Firmware Update Granularity: No 
Information Provided 00:09:29.207 Per-Namespace SMART Log: Yes 00:09:29.207 Asymmetric Namespace Access Log Page: Not Supported 00:09:29.207 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:29.207 Command Effects Log Page: Supported 00:09:29.207 Get Log Page Extended Data: Supported 00:09:29.207 Telemetry Log Pages: Not Supported 00:09:29.207 Persistent Event Log Pages: Not Supported 00:09:29.207 Supported Log Pages Log Page: May Support 00:09:29.207 Commands Supported & Effects Log Page: Not Supported 00:09:29.207 Feature Identifiers & Effects Log Page:May Support 00:09:29.207 NVMe-MI Commands & Effects Log Page: May Support 00:09:29.207 Data Area 4 for Telemetry Log: Not Supported 00:09:29.207 Error Log Page Entries Supported: 1 00:09:29.207 Keep Alive: Not Supported 00:09:29.207 00:09:29.207 NVM Command Set Attributes 00:09:29.207 ========================== 00:09:29.207 Submission Queue Entry Size 00:09:29.207 Max: 64 00:09:29.207 Min: 64 00:09:29.207 Completion Queue Entry Size 00:09:29.207 Max: 16 00:09:29.207 Min: 16 00:09:29.207 Number of Namespaces: 256 00:09:29.207 Compare Command: Supported 00:09:29.207 Write Uncorrectable Command: Not Supported 00:09:29.207 Dataset Management Command: Supported 00:09:29.207 Write Zeroes Command: Supported 00:09:29.207 Set Features Save Field: Supported 00:09:29.207 Reservations: Not Supported 00:09:29.207 Timestamp: Supported 00:09:29.207 Copy: Supported 00:09:29.207 Volatile Write Cache: Present 00:09:29.207 Atomic Write Unit (Normal): 1 00:09:29.208 Atomic Write Unit (PFail): 1 00:09:29.208 Atomic Compare & Write Unit: 1 00:09:29.208 Fused Compare & Write: Not Supported 00:09:29.208 Scatter-Gather List 00:09:29.208 SGL Command Set: Supported 00:09:29.208 SGL Keyed: Not Supported 00:09:29.208 SGL Bit Bucket Descriptor: Not Supported 00:09:29.208 SGL Metadata Pointer: Not Supported 00:09:29.208 Oversized SGL: Not Supported 00:09:29.208 SGL Metadata Address: Not Supported 00:09:29.208 SGL Offset: Not Supported 00:09:29.208 Transport SGL Data Block: Not Supported 00:09:29.208 Replay Protected Memory Block: Not Supported 00:09:29.208 00:09:29.208 Firmware Slot Information 00:09:29.208 ========================= 00:09:29.208 Active slot: 1 00:09:29.208 Slot 1 Firmware Revision: 1.0 00:09:29.208 00:09:29.208 00:09:29.208 Commands Supported and Effects 00:09:29.208 ============================== 00:09:29.208 Admin Commands 00:09:29.208 -------------- 00:09:29.208 Delete I/O Submission Queue (00h): Supported 00:09:29.208 Create I/O Submission Queue (01h): Supported 00:09:29.208 Get Log Page (02h): Supported 00:09:29.208 Delete I/O Completion Queue (04h): Supported 00:09:29.208 Create I/O Completion Queue (05h): Supported 00:09:29.208 Identify (06h): Supported 00:09:29.208 Abort (08h): Supported 00:09:29.208 Set Features (09h): Supported 00:09:29.208 Get Features (0Ah): Supported 00:09:29.208 Asynchronous Event Request (0Ch): Supported 00:09:29.208 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:29.208 Directive Send (19h): Supported 00:09:29.208 Directive Receive (1Ah): Supported 00:09:29.208 Virtualization Management (1Ch): Supported 00:09:29.208 Doorbell Buffer Config (7Ch): Supported 00:09:29.208 Format NVM (80h): Supported LBA-Change 00:09:29.208 I/O Commands 00:09:29.208 ------------ 00:09:29.208 Flush (00h): Supported LBA-Change 00:09:29.208 Write (01h): Supported LBA-Change 00:09:29.208 Read (02h): Supported 00:09:29.208 Compare (05h): Supported 00:09:29.208 Write Zeroes (08h): Supported LBA-Change 00:09:29.208 Dataset Management 
(09h): Supported LBA-Change 00:09:29.208 Unknown (0Ch): Supported 00:09:29.208 Unknown (12h): Supported 00:09:29.208 Copy (19h): Supported LBA-Change 00:09:29.208 Unknown (1Dh): Supported LBA-Change 00:09:29.208 00:09:29.208 Error Log 00:09:29.208 ========= 00:09:29.208 00:09:29.208 Arbitration 00:09:29.208 =========== 00:09:29.208 Arbitration Burst: no limit 00:09:29.208 00:09:29.208 Power Management 00:09:29.208 ================ 00:09:29.208 Number of Power States: 1 00:09:29.208 Current Power State: Power State #0 00:09:29.208 Power State #0: 00:09:29.208 Max Power: 25.00 W 00:09:29.208 Non-Operational State: Operational 00:09:29.208 Entry Latency: 16 microseconds 00:09:29.208 Exit Latency: 4 microseconds 00:09:29.208 Relative Read Throughput: 0 00:09:29.208 Relative Read Latency: 0 00:09:29.208 Relative Write Throughput: 0 00:09:29.208 Relative Write Latency: 0 00:09:29.208 Idle Power: Not Reported 00:09:29.208 Active Power: Not Reported 00:09:29.208 Non-Operational Permissive Mode: Not Supported 00:09:29.208 00:09:29.208 Health Information 00:09:29.208 ================== 00:09:29.208 Critical Warnings: 00:09:29.208 Available Spare Space: OK 00:09:29.208 [2024-11-20 15:02:30.001870] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64268 terminated unexpected 00:09:29.208 Temperature: OK 00:09:29.208 Device Reliability: OK 00:09:29.208 Read Only: No 00:09:29.208 Volatile Memory Backup: OK 00:09:29.208 Current Temperature: 323 Kelvin (50 Celsius) 00:09:29.208 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:29.208 Available Spare: 0% 00:09:29.208 Available Spare Threshold: 0% 00:09:29.208 Life Percentage Used: 0% 00:09:29.208 Data Units Read: 1134 00:09:29.208 Data Units Written: 996 00:09:29.208 Host Read Commands: 53373 00:09:29.208 Host Write Commands: 52073 00:09:29.208 Controller Busy Time: 0 minutes 00:09:29.208 Power Cycles: 0 00:09:29.208 Power On Hours: 0 hours 00:09:29.208 Unsafe Shutdowns: 0 00:09:29.208 Unrecoverable Media Errors: 0 00:09:29.208 Lifetime Error Log Entries: 0 00:09:29.208 Warning Temperature Time: 0 minutes 00:09:29.208 Critical Temperature Time: 0 minutes 00:09:29.208 00:09:29.208 Number of Queues 00:09:29.208 ================ 00:09:29.208 Number of I/O Submission Queues: 64 00:09:29.208 Number of I/O Completion Queues: 64 00:09:29.208 00:09:29.208 ZNS Specific Controller Data 00:09:29.208 ============================ 00:09:29.208 Zone Append Size Limit: 0 00:09:29.208 00:09:29.208 00:09:29.208 Active Namespaces 00:09:29.208 ================= 00:09:29.208 Namespace ID:1 00:09:29.208 Error Recovery Timeout: Unlimited 00:09:29.208 Command Set Identifier: NVM (00h) 00:09:29.208 Deallocate: Supported 00:09:29.208 Deallocated/Unwritten Error: Supported 00:09:29.208 Deallocated Read Value: All 0x00 00:09:29.208 Deallocate in Write Zeroes: Not Supported 00:09:29.208 Deallocated Guard Field: 0xFFFF 00:09:29.208 Flush: Supported 00:09:29.208 Reservation: Not Supported 00:09:29.208 Namespace Sharing Capabilities: Private 00:09:29.208 Size (in LBAs): 1310720 (5GiB) 00:09:29.208 Capacity (in LBAs): 1310720 (5GiB) 00:09:29.208 Utilization (in LBAs): 1310720 (5GiB) 00:09:29.208 Thin Provisioning: Not Supported 00:09:29.208 Per-NS Atomic Units: No 00:09:29.208 Maximum Single Source Range Length: 128 00:09:29.208 Maximum Copy Length: 128 00:09:29.208 Maximum Source Range Count: 128 00:09:29.208 NGUID/EUI64 Never Reused: No 00:09:29.208 Namespace Write Protected: No 00:09:29.208 Number of LBA Formats: 8 00:09:29.208 Current LBA Format: 
LBA Format #04 00:09:29.208 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.208 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.208 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.208 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.208 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.208 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.208 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.208 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.208 00:09:29.208 NVM Specific Namespace Data 00:09:29.208 =========================== 00:09:29.208 Logical Block Storage Tag Mask: 0 00:09:29.208 Protection Information Capabilities: 00:09:29.208 16b Guard Protection Information Storage Tag Support: No 00:09:29.208 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:29.208 Storage Tag Check Read Support: No 00:09:29.208 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.208 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.208 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.208 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.208 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.208 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.208 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.208 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.208 ===================================================== 00:09:29.208 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:29.208 ===================================================== 00:09:29.208 Controller Capabilities/Features 00:09:29.208 ================================ 00:09:29.208 Vendor ID: 1b36 00:09:29.208 Subsystem Vendor ID: 1af4 00:09:29.208 Serial Number: 12343 00:09:29.208 Model Number: QEMU NVMe Ctrl 00:09:29.208 Firmware Version: 8.0.0 00:09:29.208 Recommended Arb Burst: 6 00:09:29.208 IEEE OUI Identifier: 00 54 52 00:09:29.208 Multi-path I/O 00:09:29.208 May have multiple subsystem ports: No 00:09:29.208 May have multiple controllers: Yes 00:09:29.208 Associated with SR-IOV VF: No 00:09:29.208 Max Data Transfer Size: 524288 00:09:29.208 Max Number of Namespaces: 256 00:09:29.208 Max Number of I/O Queues: 64 00:09:29.208 NVMe Specification Version (VS): 1.4 00:09:29.208 NVMe Specification Version (Identify): 1.4 00:09:29.208 Maximum Queue Entries: 2048 00:09:29.208 Contiguous Queues Required: Yes 00:09:29.208 Arbitration Mechanisms Supported 00:09:29.208 Weighted Round Robin: Not Supported 00:09:29.208 Vendor Specific: Not Supported 00:09:29.208 Reset Timeout: 7500 ms 00:09:29.208 Doorbell Stride: 4 bytes 00:09:29.208 NVM Subsystem Reset: Not Supported 00:09:29.208 Command Sets Supported 00:09:29.208 NVM Command Set: Supported 00:09:29.208 Boot Partition: Not Supported 00:09:29.208 Memory Page Size Minimum: 4096 bytes 00:09:29.208 Memory Page Size Maximum: 65536 bytes 00:09:29.208 Persistent Memory Region: Not Supported 00:09:29.208 Optional Asynchronous Events Supported 00:09:29.209 Namespace Attribute Notices: Supported 00:09:29.209 Firmware Activation Notices: Not Supported 00:09:29.209 ANA Change Notices: Not Supported 00:09:29.209 PLE Aggregate Log 
Change Notices: Not Supported 00:09:29.209 LBA Status Info Alert Notices: Not Supported 00:09:29.209 EGE Aggregate Log Change Notices: Not Supported 00:09:29.209 Normal NVM Subsystem Shutdown event: Not Supported 00:09:29.209 Zone Descriptor Change Notices: Not Supported 00:09:29.209 Discovery Log Change Notices: Not Supported 00:09:29.209 Controller Attributes 00:09:29.209 128-bit Host Identifier: Not Supported 00:09:29.209 Non-Operational Permissive Mode: Not Supported 00:09:29.209 NVM Sets: Not Supported 00:09:29.209 Read Recovery Levels: Not Supported 00:09:29.209 Endurance Groups: Supported 00:09:29.209 Predictable Latency Mode: Not Supported 00:09:29.209 Traffic Based Keep ALive: Not Supported 00:09:29.209 Namespace Granularity: Not Supported 00:09:29.209 SQ Associations: Not Supported 00:09:29.209 UUID List: Not Supported 00:09:29.209 Multi-Domain Subsystem: Not Supported 00:09:29.209 Fixed Capacity Management: Not Supported 00:09:29.209 Variable Capacity Management: Not Supported 00:09:29.209 Delete Endurance Group: Not Supported 00:09:29.209 Delete NVM Set: Not Supported 00:09:29.209 Extended LBA Formats Supported: Supported 00:09:29.209 Flexible Data Placement Supported: Supported 00:09:29.209 00:09:29.209 Controller Memory Buffer Support 00:09:29.209 ================================ 00:09:29.209 Supported: No 00:09:29.209 00:09:29.209 Persistent Memory Region Support 00:09:29.209 ================================ 00:09:29.209 Supported: No 00:09:29.209 00:09:29.209 Admin Command Set Attributes 00:09:29.209 ============================ 00:09:29.209 Security Send/Receive: Not Supported 00:09:29.209 Format NVM: Supported 00:09:29.209 Firmware Activate/Download: Not Supported 00:09:29.209 Namespace Management: Supported 00:09:29.209 Device Self-Test: Not Supported 00:09:29.209 Directives: Supported 00:09:29.209 NVMe-MI: Not Supported 00:09:29.209 Virtualization Management: Not Supported 00:09:29.209 Doorbell Buffer Config: Supported 00:09:29.209 Get LBA Status Capability: Not Supported 00:09:29.209 Command & Feature Lockdown Capability: Not Supported 00:09:29.209 Abort Command Limit: 4 00:09:29.209 Async Event Request Limit: 4 00:09:29.209 Number of Firmware Slots: N/A 00:09:29.209 Firmware Slot 1 Read-Only: N/A 00:09:29.209 Firmware Activation Without Reset: N/A 00:09:29.209 Multiple Update Detection Support: N/A 00:09:29.209 Firmware Update Granularity: No Information Provided 00:09:29.209 Per-Namespace SMART Log: Yes 00:09:29.209 Asymmetric Namespace Access Log Page: Not Supported 00:09:29.209 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:29.209 Command Effects Log Page: Supported 00:09:29.209 Get Log Page Extended Data: Supported 00:09:29.209 Telemetry Log Pages: Not Supported 00:09:29.209 Persistent Event Log Pages: Not Supported 00:09:29.209 Supported Log Pages Log Page: May Support 00:09:29.209 Commands Supported & Effects Log Page: Not Supported 00:09:29.209 Feature Identifiers & Effects Log Page:May Support 00:09:29.209 NVMe-MI Commands & Effects Log Page: May Support 00:09:29.209 Data Area 4 for Telemetry Log: Not Supported 00:09:29.209 Error Log Page Entries Supported: 1 00:09:29.209 Keep Alive: Not Supported 00:09:29.209 00:09:29.209 NVM Command Set Attributes 00:09:29.209 ========================== 00:09:29.209 Submission Queue Entry Size 00:09:29.209 Max: 64 00:09:29.209 Min: 64 00:09:29.209 Completion Queue Entry Size 00:09:29.209 Max: 16 00:09:29.209 Min: 16 00:09:29.209 Number of Namespaces: 256 00:09:29.209 Compare Command: Supported 00:09:29.209 Write 
Uncorrectable Command: Not Supported 00:09:29.209 Dataset Management Command: Supported 00:09:29.209 Write Zeroes Command: Supported 00:09:29.209 Set Features Save Field: Supported 00:09:29.209 Reservations: Not Supported 00:09:29.209 Timestamp: Supported 00:09:29.209 Copy: Supported 00:09:29.209 Volatile Write Cache: Present 00:09:29.209 Atomic Write Unit (Normal): 1 00:09:29.209 Atomic Write Unit (PFail): 1 00:09:29.209 Atomic Compare & Write Unit: 1 00:09:29.209 Fused Compare & Write: Not Supported 00:09:29.209 Scatter-Gather List 00:09:29.209 SGL Command Set: Supported 00:09:29.209 SGL Keyed: Not Supported 00:09:29.209 SGL Bit Bucket Descriptor: Not Supported 00:09:29.209 SGL Metadata Pointer: Not Supported 00:09:29.209 Oversized SGL: Not Supported 00:09:29.209 SGL Metadata Address: Not Supported 00:09:29.209 SGL Offset: Not Supported 00:09:29.209 Transport SGL Data Block: Not Supported 00:09:29.209 Replay Protected Memory Block: Not Supported 00:09:29.209 00:09:29.209 Firmware Slot Information 00:09:29.209 ========================= 00:09:29.209 Active slot: 1 00:09:29.209 Slot 1 Firmware Revision: 1.0 00:09:29.209 00:09:29.209 00:09:29.209 Commands Supported and Effects 00:09:29.209 ============================== 00:09:29.209 Admin Commands 00:09:29.209 -------------- 00:09:29.209 Delete I/O Submission Queue (00h): Supported 00:09:29.209 Create I/O Submission Queue (01h): Supported 00:09:29.209 Get Log Page (02h): Supported 00:09:29.209 Delete I/O Completion Queue (04h): Supported 00:09:29.209 Create I/O Completion Queue (05h): Supported 00:09:29.209 Identify (06h): Supported 00:09:29.209 Abort (08h): Supported 00:09:29.209 Set Features (09h): Supported 00:09:29.209 Get Features (0Ah): Supported 00:09:29.209 Asynchronous Event Request (0Ch): Supported 00:09:29.209 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:29.209 Directive Send (19h): Supported 00:09:29.209 Directive Receive (1Ah): Supported 00:09:29.209 Virtualization Management (1Ch): Supported 00:09:29.209 Doorbell Buffer Config (7Ch): Supported 00:09:29.209 Format NVM (80h): Supported LBA-Change 00:09:29.209 I/O Commands 00:09:29.209 ------------ 00:09:29.209 Flush (00h): Supported LBA-Change 00:09:29.209 Write (01h): Supported LBA-Change 00:09:29.209 Read (02h): Supported 00:09:29.209 Compare (05h): Supported 00:09:29.209 Write Zeroes (08h): Supported LBA-Change 00:09:29.209 Dataset Management (09h): Supported LBA-Change 00:09:29.209 Unknown (0Ch): Supported 00:09:29.209 Unknown (12h): Supported 00:09:29.209 Copy (19h): Supported LBA-Change 00:09:29.209 Unknown (1Dh): Supported LBA-Change 00:09:29.209 00:09:29.209 Error Log 00:09:29.209 ========= 00:09:29.209 00:09:29.209 Arbitration 00:09:29.209 =========== 00:09:29.209 Arbitration Burst: no limit 00:09:29.209 00:09:29.209 Power Management 00:09:29.209 ================ 00:09:29.209 Number of Power States: 1 00:09:29.209 Current Power State: Power State #0 00:09:29.209 Power State #0: 00:09:29.209 Max Power: 25.00 W 00:09:29.209 Non-Operational State: Operational 00:09:29.209 Entry Latency: 16 microseconds 00:09:29.209 Exit Latency: 4 microseconds 00:09:29.209 Relative Read Throughput: 0 00:09:29.209 Relative Read Latency: 0 00:09:29.209 Relative Write Throughput: 0 00:09:29.209 Relative Write Latency: 0 00:09:29.209 Idle Power: Not Reported 00:09:29.209 Active Power: Not Reported 00:09:29.209 Non-Operational Permissive Mode: Not Supported 00:09:29.209 00:09:29.209 Health Information 00:09:29.209 ================== 00:09:29.209 Critical Warnings: 00:09:29.209 
Available Spare Space: OK 00:09:29.209 Temperature: OK 00:09:29.209 Device Reliability: OK 00:09:29.209 Read Only: No 00:09:29.209 Volatile Memory Backup: OK 00:09:29.209 Current Temperature: 323 Kelvin (50 Celsius) 00:09:29.209 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:29.209 Available Spare: 0% 00:09:29.209 Available Spare Threshold: 0% 00:09:29.209 Life Percentage Used: 0% 00:09:29.209 Data Units Read: 841 00:09:29.209 Data Units Written: 770 00:09:29.209 Host Read Commands: 36471 00:09:29.209 Host Write Commands: 35894 00:09:29.209 Controller Busy Time: 0 minutes 00:09:29.209 Power Cycles: 0 00:09:29.209 Power On Hours: 0 hours 00:09:29.209 Unsafe Shutdowns: 0 00:09:29.209 Unrecoverable Media Errors: 0 00:09:29.209 Lifetime Error Log Entries: 0 00:09:29.209 Warning Temperature Time: 0 minutes 00:09:29.209 Critical Temperature Time: 0 minutes 00:09:29.209 00:09:29.209 Number of Queues 00:09:29.209 ================ 00:09:29.209 Number of I/O Submission Queues: 64 00:09:29.209 Number of I/O Completion Queues: 64 00:09:29.209 00:09:29.209 ZNS Specific Controller Data 00:09:29.209 ============================ 00:09:29.209 Zone Append Size Limit: 0 00:09:29.209 00:09:29.209 00:09:29.209 Active Namespaces 00:09:29.209 ================= 00:09:29.209 Namespace ID:1 00:09:29.209 Error Recovery Timeout: Unlimited 00:09:29.209 Command Set Identifier: NVM (00h) 00:09:29.209 Deallocate: Supported 00:09:29.210 Deallocated/Unwritten Error: Supported 00:09:29.210 Deallocated Read Value: All 0x00 00:09:29.210 Deallocate in Write Zeroes: Not Supported 00:09:29.210 Deallocated Guard Field: 0xFFFF 00:09:29.210 Flush: Supported 00:09:29.210 Reservation: Not Supported 00:09:29.210 Namespace Sharing Capabilities: Multiple Controllers 00:09:29.210 Size (in LBAs): 262144 (1GiB) 00:09:29.210 Capacity (in LBAs): 262144 (1GiB) 00:09:29.210 Utilization (in LBAs): 262144 (1GiB) 00:09:29.210 Thin Provisioning: Not Supported 00:09:29.210 Per-NS Atomic Units: No 00:09:29.210 Maximum Single Source Range Length: 128 00:09:29.210 Maximum Copy Length: 128 00:09:29.210 Maximum Source Range Count: 128 00:09:29.210 NGUID/EUI64 Never Reused: No 00:09:29.210 Namespace Write Protected: No 00:09:29.210 Endurance group ID: 1 00:09:29.210 Number of LBA Formats: 8 00:09:29.210 Current LBA Format: LBA Format #04 00:09:29.210 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.210 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.210 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.210 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.210 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.210 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.210 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.210 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.210 00:09:29.210 Get Feature FDP: 00:09:29.210 ================ 00:09:29.210 Enabled: Yes 00:09:29.210 FDP configuration index: 0 00:09:29.210 00:09:29.210 FDP configurations log page 00:09:29.210 =========================== 00:09:29.210 Number of FDP configurations: 1 00:09:29.210 Version: 0 00:09:29.210 Size: 112 00:09:29.210 FDP Configuration Descriptor: 0 00:09:29.210 Descriptor Size: 96 00:09:29.210 Reclaim Group Identifier format: 2 00:09:29.210 FDP Volatile Write Cache: Not Present 00:09:29.210 FDP Configuration: Valid 00:09:29.210 Vendor Specific Size: 0 00:09:29.210 Number of Reclaim Groups: 2 00:09:29.210 Number of Reclaim Unit Handles: 8 00:09:29.210 Max Placement Identifiers: 128 00:09:29.210 Number of Namespaces Supported: 256 00:09:29.210 Reclaim Unit Nominal Size: 6000000 bytes 00:09:29.210 Estimated Reclaim Unit Time Limit: Not Reported 00:09:29.210 RUH Desc #000: RUH Type: Initially Isolated 00:09:29.210 RUH Desc #001: RUH Type: Initially Isolated 00:09:29.210 RUH Desc #002: RUH Type: Initially Isolated 00:09:29.210 RUH Desc #003: RUH Type: Initially Isolated 00:09:29.210 RUH Desc #004: RUH Type: Initially Isolated 00:09:29.210 RUH Desc #005: RUH Type: Initially Isolated 00:09:29.210 RUH Desc #006: RUH Type: Initially Isolated 00:09:29.210 RUH Desc #007: RUH Type: Initially Isolated 00:09:29.210 00:09:29.210 FDP reclaim unit handle usage log page 00:09:29.210 ====================================== 00:09:29.210 Number of Reclaim Unit Handles: 8 00:09:29.210 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:29.210 RUH Usage Desc #001: RUH Attributes: Unused 00:09:29.210 RUH Usage Desc #002: RUH Attributes: Unused 00:09:29.210 RUH Usage Desc #003: RUH Attributes: Unused 00:09:29.210 RUH Usage Desc #004: RUH Attributes: Unused 00:09:29.210 RUH Usage Desc #005: RUH Attributes: Unused 00:09:29.210 RUH Usage Desc #006: RUH Attributes: Unused 00:09:29.210 RUH Usage Desc #007: RUH Attributes: Unused 00:09:29.210 00:09:29.210 FDP statistics log page 00:09:29.210 ======================= 00:09:29.210 Host bytes with metadata written: 489398272 00:09:29.210 [2024-11-20 15:02:30.004592] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64268 terminated unexpected 00:09:29.210 Media bytes with metadata written: 489451520 00:09:29.210 Media bytes erased: 0 00:09:29.210 00:09:29.210 FDP events log page 00:09:29.210 =================== 00:09:29.210 Number of FDP events: 0 00:09:29.210 00:09:29.210 NVM Specific Namespace Data 00:09:29.210 =========================== 00:09:29.210 Logical Block Storage Tag Mask: 0 00:09:29.210 Protection Information Capabilities: 00:09:29.210 16b Guard Protection Information Storage Tag Support: No 00:09:29.210 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:29.210 Storage Tag Check Read Support: No 00:09:29.210 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.210 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.210 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.210 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.210 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.210 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.210 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.210 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
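
Aside: controller 12343 (nqn.2019-08.org.qemu:fdp-subsys3) is the one subsystem in this rig with Flexible Data Placement enabled, which is why its dump alone carries the Get Feature FDP, FDP configurations, reclaim unit handle usage and FDP statistics sections above: a single configuration with 2 reclaim groups and 8 reclaim unit handles, plus the "bytes with metadata written" counters that the FDP tests later assert on. On a kernel-attached controller the equivalent raw pages live at log IDs 0x20-0x23; a sketch with nvme-cli (the fdp subcommands need a 2.x build, and flag spellings vary across versions):

    # raw FDP configurations log page (LID 0x20), scoped to endurance group 1
    nvme get-log /dev/nvme0 --log-id=0x20 --log-len=512 --lsi=1
    # recent nvme-cli decodes the FDP pages directly
    nvme fdp configs /dev/nvme0 --endgrp-id=1
    nvme fdp stats /dev/nvme0 --endgrp-id=1
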
00:09:29.210 ===================================================== 00:09:29.210 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:29.210 ===================================================== 00:09:29.210 Controller Capabilities/Features 00:09:29.210 ================================ 00:09:29.210 Vendor ID: 1b36 00:09:29.210 Subsystem Vendor ID: 1af4 00:09:29.210 Serial Number: 12342 00:09:29.210 Model Number: QEMU NVMe Ctrl 00:09:29.210 Firmware Version: 8.0.0 00:09:29.210 Recommended Arb Burst: 6 00:09:29.210 IEEE OUI Identifier: 00 54 52 00:09:29.210 Multi-path I/O 00:09:29.210 May have multiple subsystem ports: No 00:09:29.210 May have multiple controllers: No 00:09:29.210 Associated with SR-IOV VF: No 00:09:29.210 Max Data Transfer Size: 524288 00:09:29.210 Max Number of Namespaces: 256 00:09:29.210 Max Number of I/O Queues: 64 00:09:29.210 NVMe Specification Version (VS): 1.4 00:09:29.210 NVMe Specification Version (Identify): 1.4 00:09:29.210 Maximum Queue Entries: 2048 00:09:29.210 Contiguous Queues Required: Yes 00:09:29.210 Arbitration Mechanisms Supported 00:09:29.210 Weighted Round Robin: Not Supported 00:09:29.210 Vendor Specific: Not Supported 00:09:29.210 Reset Timeout: 7500 ms 00:09:29.210 Doorbell Stride: 4 bytes 00:09:29.210 NVM Subsystem Reset: Not Supported 00:09:29.210 Command Sets Supported 00:09:29.210 NVM Command Set: Supported 00:09:29.210 Boot Partition: Not Supported 00:09:29.210 Memory Page Size Minimum: 4096 bytes 00:09:29.210 Memory Page Size Maximum: 65536 bytes 00:09:29.210 Persistent Memory Region: Not Supported 00:09:29.210 Optional Asynchronous Events Supported 00:09:29.210 Namespace Attribute Notices: Supported 00:09:29.210 Firmware Activation Notices: Not Supported 00:09:29.210 ANA Change Notices: Not Supported 00:09:29.210 PLE Aggregate Log Change Notices: Not Supported 00:09:29.210 LBA Status Info Alert Notices: Not Supported 00:09:29.210 EGE Aggregate Log Change Notices: Not Supported 00:09:29.210 Normal NVM Subsystem Shutdown event: Not Supported 00:09:29.210 Zone Descriptor Change Notices: Not Supported 00:09:29.210 Discovery Log Change Notices: Not Supported 00:09:29.210 Controller Attributes 00:09:29.210 128-bit Host Identifier: Not Supported 00:09:29.210 Non-Operational Permissive Mode: Not Supported 00:09:29.210 NVM Sets: Not Supported 00:09:29.210 Read Recovery Levels: Not Supported 00:09:29.210 Endurance Groups: Not Supported 00:09:29.210 Predictable Latency Mode: Not Supported 00:09:29.210 Traffic Based Keep ALive: Not Supported 00:09:29.211 Namespace Granularity: Not Supported 00:09:29.211 SQ Associations: Not Supported 00:09:29.211 UUID List: Not Supported 00:09:29.211 Multi-Domain Subsystem: Not Supported 00:09:29.211 Fixed Capacity Management: Not Supported 00:09:29.211 Variable Capacity Management: Not Supported 00:09:29.211 Delete Endurance Group: Not Supported 00:09:29.211 Delete NVM Set: Not Supported 00:09:29.211 Extended LBA Formats Supported: Supported 00:09:29.211 Flexible Data Placement Supported: Not Supported 00:09:29.211 00:09:29.211 Controller Memory Buffer Support 00:09:29.211 ================================ 00:09:29.211 Supported: No 00:09:29.211 00:09:29.211 Persistent Memory Region Support 00:09:29.211 ================================ 00:09:29.211 Supported: No 00:09:29.211 00:09:29.211 Admin Command Set Attributes 00:09:29.211 ============================ 00:09:29.211 Security Send/Receive: Not Supported 00:09:29.211 Format NVM: Supported 00:09:29.211 Firmware Activate/Download: Not Supported 00:09:29.211 Namespace Management: Supported 00:09:29.211 Device Self-Test: Not Supported 00:09:29.211 Directives: Supported 00:09:29.211 NVMe-MI: Not Supported 00:09:29.211 Virtualization Management: Not Supported 00:09:29.211 Doorbell Buffer Config: Supported 00:09:29.211 Get LBA Status Capability: Not Supported 00:09:29.211 Command & Feature Lockdown Capability: Not Supported 00:09:29.211 Abort Command Limit: 4 00:09:29.211 Async Event Request Limit: 4 00:09:29.211 Number of Firmware Slots: N/A 00:09:29.211 Firmware Slot 1 Read-Only: N/A 00:09:29.211 Firmware Activation Without Reset: N/A 
00:09:29.211 Multiple Update Detection Support: N/A 00:09:29.211 Firmware Update Granularity: No Information Provided 00:09:29.211 Per-Namespace SMART Log: Yes 00:09:29.211 Asymmetric Namespace Access Log Page: Not Supported 00:09:29.211 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:29.211 Command Effects Log Page: Supported 00:09:29.211 Get Log Page Extended Data: Supported 00:09:29.211 Telemetry Log Pages: Not Supported 00:09:29.211 Persistent Event Log Pages: Not Supported 00:09:29.211 Supported Log Pages Log Page: May Support 00:09:29.211 Commands Supported & Effects Log Page: Not Supported 00:09:29.211 Feature Identifiers & Effects Log Page:May Support 00:09:29.211 NVMe-MI Commands & Effects Log Page: May Support 00:09:29.211 Data Area 4 for Telemetry Log: Not Supported 00:09:29.211 Error Log Page Entries Supported: 1 00:09:29.211 Keep Alive: Not Supported 00:09:29.211 00:09:29.211 NVM Command Set Attributes 00:09:29.211 ========================== 00:09:29.211 Submission Queue Entry Size 00:09:29.211 Max: 64 00:09:29.211 Min: 64 00:09:29.211 Completion Queue Entry Size 00:09:29.211 Max: 16 00:09:29.211 Min: 16 00:09:29.211 Number of Namespaces: 256 00:09:29.211 Compare Command: Supported 00:09:29.211 Write Uncorrectable Command: Not Supported 00:09:29.211 Dataset Management Command: Supported 00:09:29.211 Write Zeroes Command: Supported 00:09:29.211 Set Features Save Field: Supported 00:09:29.211 Reservations: Not Supported 00:09:29.211 Timestamp: Supported 00:09:29.211 Copy: Supported 00:09:29.211 Volatile Write Cache: Present 00:09:29.211 Atomic Write Unit (Normal): 1 00:09:29.211 Atomic Write Unit (PFail): 1 00:09:29.211 Atomic Compare & Write Unit: 1 00:09:29.211 Fused Compare & Write: Not Supported 00:09:29.211 Scatter-Gather List 00:09:29.211 SGL Command Set: Supported 00:09:29.211 SGL Keyed: Not Supported 00:09:29.211 SGL Bit Bucket Descriptor: Not Supported 00:09:29.211 SGL Metadata Pointer: Not Supported 00:09:29.211 Oversized SGL: Not Supported 00:09:29.211 SGL Metadata Address: Not Supported 00:09:29.211 SGL Offset: Not Supported 00:09:29.211 Transport SGL Data Block: Not Supported 00:09:29.211 Replay Protected Memory Block: Not Supported 00:09:29.211 00:09:29.211 Firmware Slot Information 00:09:29.211 ========================= 00:09:29.211 Active slot: 1 00:09:29.211 Slot 1 Firmware Revision: 1.0 00:09:29.211 00:09:29.211 00:09:29.211 Commands Supported and Effects 00:09:29.211 ============================== 00:09:29.211 Admin Commands 00:09:29.211 -------------- 00:09:29.211 Delete I/O Submission Queue (00h): Supported 00:09:29.211 Create I/O Submission Queue (01h): Supported 00:09:29.211 Get Log Page (02h): Supported 00:09:29.211 Delete I/O Completion Queue (04h): Supported 00:09:29.211 Create I/O Completion Queue (05h): Supported 00:09:29.211 Identify (06h): Supported 00:09:29.211 Abort (08h): Supported 00:09:29.211 Set Features (09h): Supported 00:09:29.211 Get Features (0Ah): Supported 00:09:29.211 Asynchronous Event Request (0Ch): Supported 00:09:29.211 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:29.211 Directive Send (19h): Supported 00:09:29.211 Directive Receive (1Ah): Supported 00:09:29.211 Virtualization Management (1Ch): Supported 00:09:29.211 Doorbell Buffer Config (7Ch): Supported 00:09:29.211 Format NVM (80h): Supported LBA-Change 00:09:29.211 I/O Commands 00:09:29.211 ------------ 00:09:29.211 Flush (00h): Supported LBA-Change 00:09:29.211 Write (01h): Supported LBA-Change 00:09:29.211 Read (02h): Supported 00:09:29.211 Compare (05h): 
Supported 00:09:29.211 Write Zeroes (08h): Supported LBA-Change 00:09:29.211 Dataset Management (09h): Supported LBA-Change 00:09:29.211 Unknown (0Ch): Supported 00:09:29.211 Unknown (12h): Supported 00:09:29.211 Copy (19h): Supported LBA-Change 00:09:29.211 Unknown (1Dh): Supported LBA-Change 00:09:29.211 00:09:29.211 Error Log 00:09:29.211 ========= 00:09:29.211 00:09:29.211 Arbitration 00:09:29.211 =========== 00:09:29.211 Arbitration Burst: no limit 00:09:29.211 00:09:29.211 Power Management 00:09:29.211 ================ 00:09:29.211 Number of Power States: 1 00:09:29.211 Current Power State: Power State #0 00:09:29.211 Power State #0: 00:09:29.211 Max Power: 25.00 W 00:09:29.211 Non-Operational State: Operational 00:09:29.211 Entry Latency: 16 microseconds 00:09:29.211 Exit Latency: 4 microseconds 00:09:29.211 Relative Read Throughput: 0 00:09:29.211 Relative Read Latency: 0 00:09:29.211 Relative Write Throughput: 0 00:09:29.211 Relative Write Latency: 0 00:09:29.211 Idle Power: Not Reported 00:09:29.211 Active Power: Not Reported 00:09:29.211 Non-Operational Permissive Mode: Not Supported 00:09:29.211 00:09:29.211 Health Information 00:09:29.211 ================== 00:09:29.211 Critical Warnings: 00:09:29.211 Available Spare Space: OK 00:09:29.211 Temperature: OK 00:09:29.211 Device Reliability: OK 00:09:29.211 Read Only: No 00:09:29.211 Volatile Memory Backup: OK 00:09:29.211 Current Temperature: 323 Kelvin (50 Celsius) 00:09:29.211 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:29.211 Available Spare: 0% 00:09:29.211 Available Spare Threshold: 0% 00:09:29.211 Life Percentage Used: 0% 00:09:29.211 Data Units Read: 2333 00:09:29.211 Data Units Written: 2120 00:09:29.211 Host Read Commands: 107927 00:09:29.211 Host Write Commands: 106196 00:09:29.211 Controller Busy Time: 0 minutes 00:09:29.211 Power Cycles: 0 00:09:29.211 Power On Hours: 0 hours 00:09:29.211 Unsafe Shutdowns: 0 00:09:29.211 Unrecoverable Media Errors: 0 00:09:29.211 Lifetime Error Log Entries: 0 00:09:29.211 Warning Temperature Time: 0 minutes 00:09:29.211 Critical Temperature Time: 0 minutes 00:09:29.211 00:09:29.211 Number of Queues 00:09:29.211 ================ 00:09:29.211 Number of I/O Submission Queues: 64 00:09:29.211 Number of I/O Completion Queues: 64 00:09:29.211 00:09:29.211 ZNS Specific Controller Data 00:09:29.211 ============================ 00:09:29.211 Zone Append Size Limit: 0 00:09:29.211 00:09:29.211 00:09:29.211 Active Namespaces 00:09:29.211 ================= 00:09:29.211 Namespace ID:1 00:09:29.211 Error Recovery Timeout: Unlimited 00:09:29.211 Command Set Identifier: NVM (00h) 00:09:29.211 Deallocate: Supported 00:09:29.211 Deallocated/Unwritten Error: Supported 00:09:29.211 Deallocated Read Value: All 0x00 00:09:29.211 Deallocate in Write Zeroes: Not Supported 00:09:29.211 Deallocated Guard Field: 0xFFFF 00:09:29.211 Flush: Supported 00:09:29.211 Reservation: Not Supported 00:09:29.211 Namespace Sharing Capabilities: Private 00:09:29.211 Size (in LBAs): 1048576 (4GiB) 00:09:29.211 Capacity (in LBAs): 1048576 (4GiB) 00:09:29.211 Utilization (in LBAs): 1048576 (4GiB) 00:09:29.211 Thin Provisioning: Not Supported 00:09:29.211 Per-NS Atomic Units: No 00:09:29.211 Maximum Single Source Range Length: 128 00:09:29.211 Maximum Copy Length: 128 00:09:29.211 Maximum Source Range Count: 128 00:09:29.211 NGUID/EUI64 Never Reused: No 00:09:29.211 Namespace Write Protected: No 00:09:29.211 Number of LBA Formats: 8 00:09:29.212 Current LBA Format: LBA Format #04 00:09:29.212 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:09:29.212 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.212 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.212 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.212 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.212 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.212 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.212 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.212 00:09:29.212 NVM Specific Namespace Data 00:09:29.212 =========================== 00:09:29.212 Logical Block Storage Tag Mask: 0 00:09:29.212 Protection Information Capabilities: 00:09:29.212 16b Guard Protection Information Storage Tag Support: No 00:09:29.212 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:29.212 Storage Tag Check Read Support: No 00:09:29.212 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.212 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.212 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.212 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.212 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.212 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.212 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.212 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.212 Namespace ID:2 00:09:29.212 Error Recovery Timeout: Unlimited 00:09:29.212 Command Set Identifier: NVM (00h) 00:09:29.212 Deallocate: Supported 00:09:29.212 Deallocated/Unwritten Error: Supported 00:09:29.212 Deallocated Read Value: All 0x00 00:09:29.212 Deallocate in Write Zeroes: Not Supported 00:09:29.212 Deallocated Guard Field: 0xFFFF 00:09:29.212 Flush: Supported 00:09:29.212 Reservation: Not Supported 00:09:29.212 Namespace Sharing Capabilities: Private 00:09:29.212 Size (in LBAs): 1048576 (4GiB) 00:09:29.212 Capacity (in LBAs): 1048576 (4GiB) 00:09:29.212 Utilization (in LBAs): 1048576 (4GiB) 00:09:29.212 Thin Provisioning: Not Supported 00:09:29.212 Per-NS Atomic Units: No 00:09:29.212 Maximum Single Source Range Length: 128 00:09:29.212 Maximum Copy Length: 128 00:09:29.212 Maximum Source Range Count: 128 00:09:29.212 NGUID/EUI64 Never Reused: No 00:09:29.212 Namespace Write Protected: No 00:09:29.212 Number of LBA Formats: 8 00:09:29.212 Current LBA Format: LBA Format #04 00:09:29.212 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.212 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.212 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.212 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.212 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.212 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.212 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.212 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.212 00:09:29.212 NVM Specific Namespace Data 00:09:29.212 =========================== 00:09:29.212 Logical Block Storage Tag Mask: 0 00:09:29.212 Protection Information Capabilities: 00:09:29.212 16b Guard Protection Information Storage Tag Support: No 00:09:29.212 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
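[Editor's note] The Size/Capacity/Utilization figures in these dumps are counts of data blocks in the active LBA format, so with format #04 (4096-byte data, no metadata) the 1048576-LBA namespaces work out to 1048576 x 4096 = 4294967296 bytes, i.e. the "(4GiB)" annotation. A short sketch of the same arithmetic via SPDK's namespace accessors, assuming an attached spdk_nvme_ns handle:

#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Capacity = LBA count x data-block size of the active LBA format.
 * For the namespaces above: 1048576 * 4096 = 4294967296 bytes (4 GiB). */
static void
print_ns_capacity(struct spdk_nvme_ns *ns)
{
	uint64_t nsze = spdk_nvme_ns_get_num_sectors(ns);
	uint32_t bsz = spdk_nvme_ns_get_sector_size(ns);

	printf("ns%" PRIu32 ": %" PRIu64 " LBAs x %" PRIu32 " B = %" PRIu64 " bytes\n",
	       spdk_nvme_ns_get_id(ns), nsze, bsz, nsze * bsz);
}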
00:09:29.212 Storage Tag Check Read Support: No 00:09:29.212 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.212 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.212 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.212 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.212 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.212 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.212 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.212 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.212 Namespace ID:3 00:09:29.212 Error Recovery Timeout: Unlimited 00:09:29.212 Command Set Identifier: NVM (00h) 00:09:29.212 Deallocate: Supported 00:09:29.212 Deallocated/Unwritten Error: Supported 00:09:29.212 Deallocated Read Value: All 0x00 00:09:29.212 Deallocate in Write Zeroes: Not Supported 00:09:29.212 Deallocated Guard Field: 0xFFFF 00:09:29.212 Flush: Supported 00:09:29.212 Reservation: Not Supported 00:09:29.212 Namespace Sharing Capabilities: Private 00:09:29.212 Size (in LBAs): 1048576 (4GiB) 00:09:29.472 Capacity (in LBAs): 1048576 (4GiB) 00:09:29.472 Utilization (in LBAs): 1048576 (4GiB) 00:09:29.472 Thin Provisioning: Not Supported 00:09:29.472 Per-NS Atomic Units: No 00:09:29.472 Maximum Single Source Range Length: 128 00:09:29.472 Maximum Copy Length: 128 00:09:29.472 Maximum Source Range Count: 128 00:09:29.472 NGUID/EUI64 Never Reused: No 00:09:29.472 Namespace Write Protected: No 00:09:29.472 Number of LBA Formats: 8 00:09:29.472 Current LBA Format: LBA Format #04 00:09:29.472 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.472 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.472 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.472 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.472 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.472 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.472 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.472 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.472 00:09:29.472 NVM Specific Namespace Data 00:09:29.472 =========================== 00:09:29.472 Logical Block Storage Tag Mask: 0 00:09:29.472 Protection Information Capabilities: 00:09:29.472 16b Guard Protection Information Storage Tag Support: No 00:09:29.472 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:29.472 Storage Tag Check Read Support: No 00:09:29.472 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.472 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.472 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.472 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.472 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.472 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.472 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.472 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.472 15:02:30 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:29.472 15:02:30 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:29.732 ===================================================== 00:09:29.732 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:29.732 ===================================================== 00:09:29.732 Controller Capabilities/Features 00:09:29.732 ================================ 00:09:29.732 Vendor ID: 1b36 00:09:29.732 Subsystem Vendor ID: 1af4 00:09:29.732 Serial Number: 12340 00:09:29.732 Model Number: QEMU NVMe Ctrl 00:09:29.732 Firmware Version: 8.0.0 00:09:29.732 Recommended Arb Burst: 6 00:09:29.732 IEEE OUI Identifier: 00 54 52 00:09:29.732 Multi-path I/O 00:09:29.732 May have multiple subsystem ports: No 00:09:29.732 May have multiple controllers: No 00:09:29.732 Associated with SR-IOV VF: No 00:09:29.732 Max Data Transfer Size: 524288 00:09:29.732 Max Number of Namespaces: 256 00:09:29.732 Max Number of I/O Queues: 64 00:09:29.732 NVMe Specification Version (VS): 1.4 00:09:29.732 NVMe Specification Version (Identify): 1.4 00:09:29.732 Maximum Queue Entries: 2048 00:09:29.732 Contiguous Queues Required: Yes 00:09:29.732 Arbitration Mechanisms Supported 00:09:29.732 Weighted Round Robin: Not Supported 00:09:29.732 Vendor Specific: Not Supported 00:09:29.732 Reset Timeout: 7500 ms 00:09:29.732 Doorbell Stride: 4 bytes 00:09:29.732 NVM Subsystem Reset: Not Supported 00:09:29.732 Command Sets Supported 00:09:29.732 NVM Command Set: Supported 00:09:29.732 Boot Partition: Not Supported 00:09:29.732 Memory Page Size Minimum: 4096 bytes 00:09:29.732 Memory Page Size Maximum: 65536 bytes 00:09:29.732 Persistent Memory Region: Not Supported 00:09:29.732 Optional Asynchronous Events Supported 00:09:29.732 Namespace Attribute Notices: Supported 00:09:29.732 Firmware Activation Notices: Not Supported 00:09:29.732 ANA Change Notices: Not Supported 00:09:29.732 PLE Aggregate Log Change Notices: Not Supported 00:09:29.732 LBA Status Info Alert Notices: Not Supported 00:09:29.732 EGE Aggregate Log Change Notices: Not Supported 00:09:29.732 Normal NVM Subsystem Shutdown event: Not Supported 00:09:29.732 Zone Descriptor Change Notices: Not Supported 00:09:29.732 Discovery Log Change Notices: Not Supported 00:09:29.732 Controller Attributes 00:09:29.732 128-bit Host Identifier: Not Supported 00:09:29.732 Non-Operational Permissive Mode: Not Supported 00:09:29.732 NVM Sets: Not Supported 00:09:29.732 Read Recovery Levels: Not Supported 00:09:29.732 Endurance Groups: Not Supported 00:09:29.732 Predictable Latency Mode: Not Supported 00:09:29.732 Traffic Based Keep ALive: Not Supported 00:09:29.732 Namespace Granularity: Not Supported 00:09:29.732 SQ Associations: Not Supported 00:09:29.732 UUID List: Not Supported 00:09:29.732 Multi-Domain Subsystem: Not Supported 00:09:29.732 Fixed Capacity Management: Not Supported 00:09:29.732 Variable Capacity Management: Not Supported 00:09:29.732 Delete Endurance Group: Not Supported 00:09:29.732 Delete NVM Set: Not Supported 00:09:29.732 Extended LBA Formats Supported: Supported 00:09:29.732 Flexible Data Placement Supported: Not Supported 00:09:29.732 00:09:29.732 Controller Memory Buffer Support 00:09:29.732 ================================ 00:09:29.732 Supported: No 00:09:29.732 00:09:29.732 Persistent Memory Region Support 00:09:29.732 
================================ 00:09:29.732 Supported: No 00:09:29.732 00:09:29.733 Admin Command Set Attributes 00:09:29.733 ============================ 00:09:29.733 Security Send/Receive: Not Supported 00:09:29.733 Format NVM: Supported 00:09:29.733 Firmware Activate/Download: Not Supported 00:09:29.733 Namespace Management: Supported 00:09:29.733 Device Self-Test: Not Supported 00:09:29.733 Directives: Supported 00:09:29.733 NVMe-MI: Not Supported 00:09:29.733 Virtualization Management: Not Supported 00:09:29.733 Doorbell Buffer Config: Supported 00:09:29.733 Get LBA Status Capability: Not Supported 00:09:29.733 Command & Feature Lockdown Capability: Not Supported 00:09:29.733 Abort Command Limit: 4 00:09:29.733 Async Event Request Limit: 4 00:09:29.733 Number of Firmware Slots: N/A 00:09:29.733 Firmware Slot 1 Read-Only: N/A 00:09:29.733 Firmware Activation Without Reset: N/A 00:09:29.733 Multiple Update Detection Support: N/A 00:09:29.733 Firmware Update Granularity: No Information Provided 00:09:29.733 Per-Namespace SMART Log: Yes 00:09:29.733 Asymmetric Namespace Access Log Page: Not Supported 00:09:29.733 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:29.733 Command Effects Log Page: Supported 00:09:29.733 Get Log Page Extended Data: Supported 00:09:29.733 Telemetry Log Pages: Not Supported 00:09:29.733 Persistent Event Log Pages: Not Supported 00:09:29.733 Supported Log Pages Log Page: May Support 00:09:29.733 Commands Supported & Effects Log Page: Not Supported 00:09:29.733 Feature Identifiers & Effects Log Page:May Support 00:09:29.733 NVMe-MI Commands & Effects Log Page: May Support 00:09:29.733 Data Area 4 for Telemetry Log: Not Supported 00:09:29.733 Error Log Page Entries Supported: 1 00:09:29.733 Keep Alive: Not Supported 00:09:29.733 00:09:29.733 NVM Command Set Attributes 00:09:29.733 ========================== 00:09:29.733 Submission Queue Entry Size 00:09:29.733 Max: 64 00:09:29.733 Min: 64 00:09:29.733 Completion Queue Entry Size 00:09:29.733 Max: 16 00:09:29.733 Min: 16 00:09:29.733 Number of Namespaces: 256 00:09:29.733 Compare Command: Supported 00:09:29.733 Write Uncorrectable Command: Not Supported 00:09:29.733 Dataset Management Command: Supported 00:09:29.733 Write Zeroes Command: Supported 00:09:29.733 Set Features Save Field: Supported 00:09:29.733 Reservations: Not Supported 00:09:29.733 Timestamp: Supported 00:09:29.733 Copy: Supported 00:09:29.733 Volatile Write Cache: Present 00:09:29.733 Atomic Write Unit (Normal): 1 00:09:29.733 Atomic Write Unit (PFail): 1 00:09:29.733 Atomic Compare & Write Unit: 1 00:09:29.733 Fused Compare & Write: Not Supported 00:09:29.733 Scatter-Gather List 00:09:29.733 SGL Command Set: Supported 00:09:29.733 SGL Keyed: Not Supported 00:09:29.733 SGL Bit Bucket Descriptor: Not Supported 00:09:29.733 SGL Metadata Pointer: Not Supported 00:09:29.733 Oversized SGL: Not Supported 00:09:29.733 SGL Metadata Address: Not Supported 00:09:29.733 SGL Offset: Not Supported 00:09:29.733 Transport SGL Data Block: Not Supported 00:09:29.733 Replay Protected Memory Block: Not Supported 00:09:29.733 00:09:29.733 Firmware Slot Information 00:09:29.733 ========================= 00:09:29.733 Active slot: 1 00:09:29.733 Slot 1 Firmware Revision: 1.0 00:09:29.733 00:09:29.733 00:09:29.733 Commands Supported and Effects 00:09:29.733 ============================== 00:09:29.733 Admin Commands 00:09:29.733 -------------- 00:09:29.733 Delete I/O Submission Queue (00h): Supported 00:09:29.733 Create I/O Submission Queue (01h): Supported 00:09:29.733 
Get Log Page (02h): Supported 00:09:29.733 Delete I/O Completion Queue (04h): Supported 00:09:29.733 Create I/O Completion Queue (05h): Supported 00:09:29.733 Identify (06h): Supported 00:09:29.733 Abort (08h): Supported 00:09:29.733 Set Features (09h): Supported 00:09:29.733 Get Features (0Ah): Supported 00:09:29.733 Asynchronous Event Request (0Ch): Supported 00:09:29.733 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:29.733 Directive Send (19h): Supported 00:09:29.733 Directive Receive (1Ah): Supported 00:09:29.733 Virtualization Management (1Ch): Supported 00:09:29.733 Doorbell Buffer Config (7Ch): Supported 00:09:29.733 Format NVM (80h): Supported LBA-Change 00:09:29.733 I/O Commands 00:09:29.733 ------------ 00:09:29.733 Flush (00h): Supported LBA-Change 00:09:29.733 Write (01h): Supported LBA-Change 00:09:29.733 Read (02h): Supported 00:09:29.733 Compare (05h): Supported 00:09:29.733 Write Zeroes (08h): Supported LBA-Change 00:09:29.733 Dataset Management (09h): Supported LBA-Change 00:09:29.733 Unknown (0Ch): Supported 00:09:29.733 Unknown (12h): Supported 00:09:29.733 Copy (19h): Supported LBA-Change 00:09:29.733 Unknown (1Dh): Supported LBA-Change 00:09:29.733 00:09:29.733 Error Log 00:09:29.733 ========= 00:09:29.733 00:09:29.733 Arbitration 00:09:29.733 =========== 00:09:29.733 Arbitration Burst: no limit 00:09:29.733 00:09:29.733 Power Management 00:09:29.733 ================ 00:09:29.733 Number of Power States: 1 00:09:29.733 Current Power State: Power State #0 00:09:29.733 Power State #0: 00:09:29.733 Max Power: 25.00 W 00:09:29.733 Non-Operational State: Operational 00:09:29.733 Entry Latency: 16 microseconds 00:09:29.733 Exit Latency: 4 microseconds 00:09:29.733 Relative Read Throughput: 0 00:09:29.733 Relative Read Latency: 0 00:09:29.733 Relative Write Throughput: 0 00:09:29.733 Relative Write Latency: 0 00:09:29.733 Idle Power: Not Reported 00:09:29.733 Active Power: Not Reported 00:09:29.733 Non-Operational Permissive Mode: Not Supported 00:09:29.733 00:09:29.733 Health Information 00:09:29.733 ================== 00:09:29.733 Critical Warnings: 00:09:29.733 Available Spare Space: OK 00:09:29.733 Temperature: OK 00:09:29.733 Device Reliability: OK 00:09:29.733 Read Only: No 00:09:29.733 Volatile Memory Backup: OK 00:09:29.733 Current Temperature: 323 Kelvin (50 Celsius) 00:09:29.733 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:29.733 Available Spare: 0% 00:09:29.733 Available Spare Threshold: 0% 00:09:29.733 Life Percentage Used: 0% 00:09:29.733 Data Units Read: 739 00:09:29.733 Data Units Written: 667 00:09:29.733 Host Read Commands: 35372 00:09:29.733 Host Write Commands: 35158 00:09:29.733 Controller Busy Time: 0 minutes 00:09:29.733 Power Cycles: 0 00:09:29.733 Power On Hours: 0 hours 00:09:29.733 Unsafe Shutdowns: 0 00:09:29.733 Unrecoverable Media Errors: 0 00:09:29.733 Lifetime Error Log Entries: 0 00:09:29.733 Warning Temperature Time: 0 minutes 00:09:29.733 Critical Temperature Time: 0 minutes 00:09:29.733 00:09:29.733 Number of Queues 00:09:29.733 ================ 00:09:29.733 Number of I/O Submission Queues: 64 00:09:29.733 Number of I/O Completion Queues: 64 00:09:29.733 00:09:29.733 ZNS Specific Controller Data 00:09:29.733 ============================ 00:09:29.733 Zone Append Size Limit: 0 00:09:29.733 00:09:29.733 00:09:29.733 Active Namespaces 00:09:29.733 ================= 00:09:29.733 Namespace ID:1 00:09:29.733 Error Recovery Timeout: Unlimited 00:09:29.733 Command Set Identifier: NVM (00h) 00:09:29.733 Deallocate: Supported 
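[Editor's note] The Health Information block above (temperature in Kelvin, spare and wear percentages, unit counters) is the standard SMART / Health Information log page, log ID 02h, for which SPDK ships a typed structure. A sketch of reading it, again assuming an attached ctrlr; completion-error handling is elided:

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void
health_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cpl;		/* error check elided in this sketch */
	*(bool *)arg = true;
}

static void
print_health(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_health_information_page hp;
	bool done = false;

	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
					     SPDK_NVME_GLOBAL_NS_TAG, &hp, sizeof(hp),
					     0, health_done, &done) != 0) {
		return;
	}
	while (!done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	/* The controller reports temperature in Kelvin, hence the
	 * "323 Kelvin (50 Celsius)" style of line in the dump. */
	printf("temperature: %u K (%d C), spare: %u%%, used: %u%%\n",
	       hp.temperature, (int)hp.temperature - 273,
	       hp.available_spare, hp.percentage_used);
}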
00:09:29.733 Deallocated/Unwritten Error: Supported 00:09:29.733 Deallocated Read Value: All 0x00 00:09:29.733 Deallocate in Write Zeroes: Not Supported 00:09:29.733 Deallocated Guard Field: 0xFFFF 00:09:29.733 Flush: Supported 00:09:29.733 Reservation: Not Supported 00:09:29.733 Metadata Transferred as: Separate Metadata Buffer 00:09:29.733 Namespace Sharing Capabilities: Private 00:09:29.733 Size (in LBAs): 1548666 (5GiB) 00:09:29.733 Capacity (in LBAs): 1548666 (5GiB) 00:09:29.733 Utilization (in LBAs): 1548666 (5GiB) 00:09:29.733 Thin Provisioning: Not Supported 00:09:29.733 Per-NS Atomic Units: No 00:09:29.733 Maximum Single Source Range Length: 128 00:09:29.733 Maximum Copy Length: 128 00:09:29.733 Maximum Source Range Count: 128 00:09:29.733 NGUID/EUI64 Never Reused: No 00:09:29.733 Namespace Write Protected: No 00:09:29.733 Number of LBA Formats: 8 00:09:29.733 Current LBA Format: LBA Format #07 00:09:29.733 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.733 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.733 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.733 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.733 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.733 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.733 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.733 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.733 00:09:29.733 NVM Specific Namespace Data 00:09:29.733 =========================== 00:09:29.733 Logical Block Storage Tag Mask: 0 00:09:29.734 Protection Information Capabilities: 00:09:29.734 16b Guard Protection Information Storage Tag Support: No 00:09:29.734 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:29.734 Storage Tag Check Read Support: No 00:09:29.734 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.734 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.734 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.734 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.734 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.734 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.734 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.734 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:29.734 15:02:30 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:29.734 15:02:30 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:30.303 ===================================================== 00:09:30.303 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:30.303 ===================================================== 00:09:30.303 Controller Capabilities/Features 00:09:30.303 ================================ 00:09:30.303 Vendor ID: 1b36 00:09:30.303 Subsystem Vendor ID: 1af4 00:09:30.303 Serial Number: 12341 00:09:30.303 Model Number: QEMU NVMe Ctrl 00:09:30.303 Firmware Version: 8.0.0 00:09:30.303 Recommended Arb Burst: 6 00:09:30.303 IEEE OUI Identifier: 00 54 52 00:09:30.303 Multi-path I/O 00:09:30.303 May have multiple subsystem ports: No 00:09:30.303 May have multiple 
controllers: No 00:09:30.303 Associated with SR-IOV VF: No 00:09:30.303 Max Data Transfer Size: 524288 00:09:30.303 Max Number of Namespaces: 256 00:09:30.303 Max Number of I/O Queues: 64 00:09:30.303 NVMe Specification Version (VS): 1.4 00:09:30.303 NVMe Specification Version (Identify): 1.4 00:09:30.303 Maximum Queue Entries: 2048 00:09:30.303 Contiguous Queues Required: Yes 00:09:30.303 Arbitration Mechanisms Supported 00:09:30.303 Weighted Round Robin: Not Supported 00:09:30.303 Vendor Specific: Not Supported 00:09:30.303 Reset Timeout: 7500 ms 00:09:30.303 Doorbell Stride: 4 bytes 00:09:30.303 NVM Subsystem Reset: Not Supported 00:09:30.303 Command Sets Supported 00:09:30.303 NVM Command Set: Supported 00:09:30.303 Boot Partition: Not Supported 00:09:30.303 Memory Page Size Minimum: 4096 bytes 00:09:30.303 Memory Page Size Maximum: 65536 bytes 00:09:30.303 Persistent Memory Region: Not Supported 00:09:30.303 Optional Asynchronous Events Supported 00:09:30.303 Namespace Attribute Notices: Supported 00:09:30.303 Firmware Activation Notices: Not Supported 00:09:30.303 ANA Change Notices: Not Supported 00:09:30.303 PLE Aggregate Log Change Notices: Not Supported 00:09:30.303 LBA Status Info Alert Notices: Not Supported 00:09:30.303 EGE Aggregate Log Change Notices: Not Supported 00:09:30.303 Normal NVM Subsystem Shutdown event: Not Supported 00:09:30.303 Zone Descriptor Change Notices: Not Supported 00:09:30.303 Discovery Log Change Notices: Not Supported 00:09:30.303 Controller Attributes 00:09:30.303 128-bit Host Identifier: Not Supported 00:09:30.303 Non-Operational Permissive Mode: Not Supported 00:09:30.303 NVM Sets: Not Supported 00:09:30.303 Read Recovery Levels: Not Supported 00:09:30.303 Endurance Groups: Not Supported 00:09:30.303 Predictable Latency Mode: Not Supported 00:09:30.303 Traffic Based Keep ALive: Not Supported 00:09:30.303 Namespace Granularity: Not Supported 00:09:30.303 SQ Associations: Not Supported 00:09:30.303 UUID List: Not Supported 00:09:30.303 Multi-Domain Subsystem: Not Supported 00:09:30.303 Fixed Capacity Management: Not Supported 00:09:30.303 Variable Capacity Management: Not Supported 00:09:30.303 Delete Endurance Group: Not Supported 00:09:30.303 Delete NVM Set: Not Supported 00:09:30.303 Extended LBA Formats Supported: Supported 00:09:30.303 Flexible Data Placement Supported: Not Supported 00:09:30.303 00:09:30.303 Controller Memory Buffer Support 00:09:30.303 ================================ 00:09:30.303 Supported: No 00:09:30.303 00:09:30.303 Persistent Memory Region Support 00:09:30.303 ================================ 00:09:30.303 Supported: No 00:09:30.303 00:09:30.303 Admin Command Set Attributes 00:09:30.303 ============================ 00:09:30.303 Security Send/Receive: Not Supported 00:09:30.303 Format NVM: Supported 00:09:30.303 Firmware Activate/Download: Not Supported 00:09:30.303 Namespace Management: Supported 00:09:30.303 Device Self-Test: Not Supported 00:09:30.303 Directives: Supported 00:09:30.303 NVMe-MI: Not Supported 00:09:30.303 Virtualization Management: Not Supported 00:09:30.303 Doorbell Buffer Config: Supported 00:09:30.303 Get LBA Status Capability: Not Supported 00:09:30.303 Command & Feature Lockdown Capability: Not Supported 00:09:30.303 Abort Command Limit: 4 00:09:30.303 Async Event Request Limit: 4 00:09:30.303 Number of Firmware Slots: N/A 00:09:30.303 Firmware Slot 1 Read-Only: N/A 00:09:30.303 Firmware Activation Without Reset: N/A 00:09:30.303 Multiple Update Detection Support: N/A 00:09:30.303 Firmware Update 
Granularity: No Information Provided 00:09:30.303 Per-Namespace SMART Log: Yes 00:09:30.303 Asymmetric Namespace Access Log Page: Not Supported 00:09:30.303 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:30.303 Command Effects Log Page: Supported 00:09:30.303 Get Log Page Extended Data: Supported 00:09:30.303 Telemetry Log Pages: Not Supported 00:09:30.303 Persistent Event Log Pages: Not Supported 00:09:30.303 Supported Log Pages Log Page: May Support 00:09:30.303 Commands Supported & Effects Log Page: Not Supported 00:09:30.303 Feature Identifiers & Effects Log Page:May Support 00:09:30.303 NVMe-MI Commands & Effects Log Page: May Support 00:09:30.303 Data Area 4 for Telemetry Log: Not Supported 00:09:30.303 Error Log Page Entries Supported: 1 00:09:30.303 Keep Alive: Not Supported 00:09:30.304 00:09:30.304 NVM Command Set Attributes 00:09:30.304 ========================== 00:09:30.304 Submission Queue Entry Size 00:09:30.304 Max: 64 00:09:30.304 Min: 64 00:09:30.304 Completion Queue Entry Size 00:09:30.304 Max: 16 00:09:30.304 Min: 16 00:09:30.304 Number of Namespaces: 256 00:09:30.304 Compare Command: Supported 00:09:30.304 Write Uncorrectable Command: Not Supported 00:09:30.304 Dataset Management Command: Supported 00:09:30.304 Write Zeroes Command: Supported 00:09:30.304 Set Features Save Field: Supported 00:09:30.304 Reservations: Not Supported 00:09:30.304 Timestamp: Supported 00:09:30.304 Copy: Supported 00:09:30.304 Volatile Write Cache: Present 00:09:30.304 Atomic Write Unit (Normal): 1 00:09:30.304 Atomic Write Unit (PFail): 1 00:09:30.304 Atomic Compare & Write Unit: 1 00:09:30.304 Fused Compare & Write: Not Supported 00:09:30.304 Scatter-Gather List 00:09:30.304 SGL Command Set: Supported 00:09:30.304 SGL Keyed: Not Supported 00:09:30.304 SGL Bit Bucket Descriptor: Not Supported 00:09:30.304 SGL Metadata Pointer: Not Supported 00:09:30.304 Oversized SGL: Not Supported 00:09:30.304 SGL Metadata Address: Not Supported 00:09:30.304 SGL Offset: Not Supported 00:09:30.304 Transport SGL Data Block: Not Supported 00:09:30.304 Replay Protected Memory Block: Not Supported 00:09:30.304 00:09:30.304 Firmware Slot Information 00:09:30.304 ========================= 00:09:30.304 Active slot: 1 00:09:30.304 Slot 1 Firmware Revision: 1.0 00:09:30.304 00:09:30.304 00:09:30.304 Commands Supported and Effects 00:09:30.304 ============================== 00:09:30.304 Admin Commands 00:09:30.304 -------------- 00:09:30.304 Delete I/O Submission Queue (00h): Supported 00:09:30.304 Create I/O Submission Queue (01h): Supported 00:09:30.304 Get Log Page (02h): Supported 00:09:30.304 Delete I/O Completion Queue (04h): Supported 00:09:30.304 Create I/O Completion Queue (05h): Supported 00:09:30.304 Identify (06h): Supported 00:09:30.304 Abort (08h): Supported 00:09:30.304 Set Features (09h): Supported 00:09:30.304 Get Features (0Ah): Supported 00:09:30.304 Asynchronous Event Request (0Ch): Supported 00:09:30.304 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:30.304 Directive Send (19h): Supported 00:09:30.304 Directive Receive (1Ah): Supported 00:09:30.304 Virtualization Management (1Ch): Supported 00:09:30.304 Doorbell Buffer Config (7Ch): Supported 00:09:30.304 Format NVM (80h): Supported LBA-Change 00:09:30.304 I/O Commands 00:09:30.304 ------------ 00:09:30.304 Flush (00h): Supported LBA-Change 00:09:30.304 Write (01h): Supported LBA-Change 00:09:30.304 Read (02h): Supported 00:09:30.304 Compare (05h): Supported 00:09:30.304 Write Zeroes (08h): Supported LBA-Change 00:09:30.304 
Dataset Management (09h): Supported LBA-Change 00:09:30.304 Unknown (0Ch): Supported 00:09:30.304 Unknown (12h): Supported 00:09:30.304 Copy (19h): Supported LBA-Change 00:09:30.304 Unknown (1Dh): Supported LBA-Change 00:09:30.304 00:09:30.304 Error Log 00:09:30.304 ========= 00:09:30.304 00:09:30.304 Arbitration 00:09:30.304 =========== 00:09:30.304 Arbitration Burst: no limit 00:09:30.304 00:09:30.304 Power Management 00:09:30.304 ================ 00:09:30.304 Number of Power States: 1 00:09:30.304 Current Power State: Power State #0 00:09:30.304 Power State #0: 00:09:30.304 Max Power: 25.00 W 00:09:30.304 Non-Operational State: Operational 00:09:30.304 Entry Latency: 16 microseconds 00:09:30.304 Exit Latency: 4 microseconds 00:09:30.304 Relative Read Throughput: 0 00:09:30.304 Relative Read Latency: 0 00:09:30.304 Relative Write Throughput: 0 00:09:30.304 Relative Write Latency: 0 00:09:30.304 Idle Power: Not Reported 00:09:30.304 Active Power: Not Reported 00:09:30.304 Non-Operational Permissive Mode: Not Supported 00:09:30.304 00:09:30.304 Health Information 00:09:30.304 ================== 00:09:30.304 Critical Warnings: 00:09:30.304 Available Spare Space: OK 00:09:30.304 Temperature: OK 00:09:30.304 Device Reliability: OK 00:09:30.304 Read Only: No 00:09:30.304 Volatile Memory Backup: OK 00:09:30.304 Current Temperature: 323 Kelvin (50 Celsius) 00:09:30.304 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:30.304 Available Spare: 0% 00:09:30.304 Available Spare Threshold: 0% 00:09:30.304 Life Percentage Used: 0% 00:09:30.304 Data Units Read: 1134 00:09:30.304 Data Units Written: 996 00:09:30.304 Host Read Commands: 53373 00:09:30.304 Host Write Commands: 52073 00:09:30.304 Controller Busy Time: 0 minutes 00:09:30.304 Power Cycles: 0 00:09:30.304 Power On Hours: 0 hours 00:09:30.304 Unsafe Shutdowns: 0 00:09:30.304 Unrecoverable Media Errors: 0 00:09:30.304 Lifetime Error Log Entries: 0 00:09:30.304 Warning Temperature Time: 0 minutes 00:09:30.304 Critical Temperature Time: 0 minutes 00:09:30.304 00:09:30.304 Number of Queues 00:09:30.304 ================ 00:09:30.304 Number of I/O Submission Queues: 64 00:09:30.304 Number of I/O Completion Queues: 64 00:09:30.304 00:09:30.304 ZNS Specific Controller Data 00:09:30.304 ============================ 00:09:30.304 Zone Append Size Limit: 0 00:09:30.304 00:09:30.304 00:09:30.304 Active Namespaces 00:09:30.304 ================= 00:09:30.304 Namespace ID:1 00:09:30.304 Error Recovery Timeout: Unlimited 00:09:30.304 Command Set Identifier: NVM (00h) 00:09:30.304 Deallocate: Supported 00:09:30.304 Deallocated/Unwritten Error: Supported 00:09:30.304 Deallocated Read Value: All 0x00 00:09:30.304 Deallocate in Write Zeroes: Not Supported 00:09:30.304 Deallocated Guard Field: 0xFFFF 00:09:30.304 Flush: Supported 00:09:30.304 Reservation: Not Supported 00:09:30.304 Namespace Sharing Capabilities: Private 00:09:30.304 Size (in LBAs): 1310720 (5GiB) 00:09:30.304 Capacity (in LBAs): 1310720 (5GiB) 00:09:30.304 Utilization (in LBAs): 1310720 (5GiB) 00:09:30.304 Thin Provisioning: Not Supported 00:09:30.304 Per-NS Atomic Units: No 00:09:30.304 Maximum Single Source Range Length: 128 00:09:30.304 Maximum Copy Length: 128 00:09:30.304 Maximum Source Range Count: 128 00:09:30.304 NGUID/EUI64 Never Reused: No 00:09:30.304 Namespace Write Protected: No 00:09:30.304 Number of LBA Formats: 8 00:09:30.304 Current LBA Format: LBA Format #04 00:09:30.304 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:30.304 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:09:30.304 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:30.304 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:30.304 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:30.304 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:30.304 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:30.304 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:30.304 00:09:30.304 NVM Specific Namespace Data 00:09:30.304 =========================== 00:09:30.304 Logical Block Storage Tag Mask: 0 00:09:30.304 Protection Information Capabilities: 00:09:30.304 16b Guard Protection Information Storage Tag Support: No 00:09:30.305 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:30.305 Storage Tag Check Read Support: No 00:09:30.305 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.305 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.305 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.305 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.305 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.305 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.305 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.305 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.305 15:02:30 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:30.305 15:02:30 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:30.564 ===================================================== 00:09:30.564 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:30.564 ===================================================== 00:09:30.564 Controller Capabilities/Features 00:09:30.564 ================================ 00:09:30.564 Vendor ID: 1b36 00:09:30.564 Subsystem Vendor ID: 1af4 00:09:30.564 Serial Number: 12342 00:09:30.564 Model Number: QEMU NVMe Ctrl 00:09:30.564 Firmware Version: 8.0.0 00:09:30.564 Recommended Arb Burst: 6 00:09:30.564 IEEE OUI Identifier: 00 54 52 00:09:30.564 Multi-path I/O 00:09:30.564 May have multiple subsystem ports: No 00:09:30.564 May have multiple controllers: No 00:09:30.564 Associated with SR-IOV VF: No 00:09:30.564 Max Data Transfer Size: 524288 00:09:30.564 Max Number of Namespaces: 256 00:09:30.564 Max Number of I/O Queues: 64 00:09:30.564 NVMe Specification Version (VS): 1.4 00:09:30.564 NVMe Specification Version (Identify): 1.4 00:09:30.564 Maximum Queue Entries: 2048 00:09:30.564 Contiguous Queues Required: Yes 00:09:30.564 Arbitration Mechanisms Supported 00:09:30.564 Weighted Round Robin: Not Supported 00:09:30.564 Vendor Specific: Not Supported 00:09:30.564 Reset Timeout: 7500 ms 00:09:30.564 Doorbell Stride: 4 bytes 00:09:30.564 NVM Subsystem Reset: Not Supported 00:09:30.564 Command Sets Supported 00:09:30.564 NVM Command Set: Supported 00:09:30.564 Boot Partition: Not Supported 00:09:30.564 Memory Page Size Minimum: 4096 bytes 00:09:30.564 Memory Page Size Maximum: 65536 bytes 00:09:30.564 Persistent Memory Region: Not Supported 00:09:30.564 Optional Asynchronous Events Supported 00:09:30.564 Namespace Attribute Notices: Supported 00:09:30.564 Firmware 
Activation Notices: Not Supported 00:09:30.564 ANA Change Notices: Not Supported 00:09:30.564 PLE Aggregate Log Change Notices: Not Supported 00:09:30.564 LBA Status Info Alert Notices: Not Supported 00:09:30.564 EGE Aggregate Log Change Notices: Not Supported 00:09:30.564 Normal NVM Subsystem Shutdown event: Not Supported 00:09:30.564 Zone Descriptor Change Notices: Not Supported 00:09:30.564 Discovery Log Change Notices: Not Supported 00:09:30.564 Controller Attributes 00:09:30.564 128-bit Host Identifier: Not Supported 00:09:30.564 Non-Operational Permissive Mode: Not Supported 00:09:30.564 NVM Sets: Not Supported 00:09:30.564 Read Recovery Levels: Not Supported 00:09:30.564 Endurance Groups: Not Supported 00:09:30.564 Predictable Latency Mode: Not Supported 00:09:30.564 Traffic Based Keep ALive: Not Supported 00:09:30.564 Namespace Granularity: Not Supported 00:09:30.564 SQ Associations: Not Supported 00:09:30.564 UUID List: Not Supported 00:09:30.564 Multi-Domain Subsystem: Not Supported 00:09:30.564 Fixed Capacity Management: Not Supported 00:09:30.564 Variable Capacity Management: Not Supported 00:09:30.564 Delete Endurance Group: Not Supported 00:09:30.564 Delete NVM Set: Not Supported 00:09:30.564 Extended LBA Formats Supported: Supported 00:09:30.564 Flexible Data Placement Supported: Not Supported 00:09:30.564 00:09:30.564 Controller Memory Buffer Support 00:09:30.564 ================================ 00:09:30.564 Supported: No 00:09:30.564 00:09:30.564 Persistent Memory Region Support 00:09:30.564 ================================ 00:09:30.564 Supported: No 00:09:30.564 00:09:30.564 Admin Command Set Attributes 00:09:30.564 ============================ 00:09:30.564 Security Send/Receive: Not Supported 00:09:30.564 Format NVM: Supported 00:09:30.564 Firmware Activate/Download: Not Supported 00:09:30.564 Namespace Management: Supported 00:09:30.564 Device Self-Test: Not Supported 00:09:30.564 Directives: Supported 00:09:30.564 NVMe-MI: Not Supported 00:09:30.564 Virtualization Management: Not Supported 00:09:30.564 Doorbell Buffer Config: Supported 00:09:30.564 Get LBA Status Capability: Not Supported 00:09:30.564 Command & Feature Lockdown Capability: Not Supported 00:09:30.564 Abort Command Limit: 4 00:09:30.564 Async Event Request Limit: 4 00:09:30.564 Number of Firmware Slots: N/A 00:09:30.564 Firmware Slot 1 Read-Only: N/A 00:09:30.564 Firmware Activation Without Reset: N/A 00:09:30.564 Multiple Update Detection Support: N/A 00:09:30.564 Firmware Update Granularity: No Information Provided 00:09:30.564 Per-Namespace SMART Log: Yes 00:09:30.564 Asymmetric Namespace Access Log Page: Not Supported 00:09:30.564 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:30.564 Command Effects Log Page: Supported 00:09:30.564 Get Log Page Extended Data: Supported 00:09:30.564 Telemetry Log Pages: Not Supported 00:09:30.564 Persistent Event Log Pages: Not Supported 00:09:30.564 Supported Log Pages Log Page: May Support 00:09:30.564 Commands Supported & Effects Log Page: Not Supported 00:09:30.564 Feature Identifiers & Effects Log Page:May Support 00:09:30.564 NVMe-MI Commands & Effects Log Page: May Support 00:09:30.564 Data Area 4 for Telemetry Log: Not Supported 00:09:30.564 Error Log Page Entries Supported: 1 00:09:30.564 Keep Alive: Not Supported 00:09:30.564 00:09:30.564 NVM Command Set Attributes 00:09:30.564 ========================== 00:09:30.564 Submission Queue Entry Size 00:09:30.564 Max: 64 00:09:30.564 Min: 64 00:09:30.564 Completion Queue Entry Size 00:09:30.564 Max: 16 
00:09:30.564 Min: 16 00:09:30.564 Number of Namespaces: 256 00:09:30.564 Compare Command: Supported 00:09:30.564 Write Uncorrectable Command: Not Supported 00:09:30.564 Dataset Management Command: Supported 00:09:30.564 Write Zeroes Command: Supported 00:09:30.564 Set Features Save Field: Supported 00:09:30.564 Reservations: Not Supported 00:09:30.564 Timestamp: Supported 00:09:30.564 Copy: Supported 00:09:30.564 Volatile Write Cache: Present 00:09:30.564 Atomic Write Unit (Normal): 1 00:09:30.564 Atomic Write Unit (PFail): 1 00:09:30.564 Atomic Compare & Write Unit: 1 00:09:30.564 Fused Compare & Write: Not Supported 00:09:30.564 Scatter-Gather List 00:09:30.564 SGL Command Set: Supported 00:09:30.564 SGL Keyed: Not Supported 00:09:30.564 SGL Bit Bucket Descriptor: Not Supported 00:09:30.564 SGL Metadata Pointer: Not Supported 00:09:30.564 Oversized SGL: Not Supported 00:09:30.564 SGL Metadata Address: Not Supported 00:09:30.564 SGL Offset: Not Supported 00:09:30.564 Transport SGL Data Block: Not Supported 00:09:30.564 Replay Protected Memory Block: Not Supported 00:09:30.564 00:09:30.564 Firmware Slot Information 00:09:30.564 ========================= 00:09:30.564 Active slot: 1 00:09:30.564 Slot 1 Firmware Revision: 1.0 00:09:30.564 00:09:30.564 00:09:30.564 Commands Supported and Effects 00:09:30.564 ============================== 00:09:30.564 Admin Commands 00:09:30.564 -------------- 00:09:30.564 Delete I/O Submission Queue (00h): Supported 00:09:30.564 Create I/O Submission Queue (01h): Supported 00:09:30.564 Get Log Page (02h): Supported 00:09:30.564 Delete I/O Completion Queue (04h): Supported 00:09:30.564 Create I/O Completion Queue (05h): Supported 00:09:30.564 Identify (06h): Supported 00:09:30.564 Abort (08h): Supported 00:09:30.564 Set Features (09h): Supported 00:09:30.564 Get Features (0Ah): Supported 00:09:30.564 Asynchronous Event Request (0Ch): Supported 00:09:30.564 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:30.565 Directive Send (19h): Supported 00:09:30.565 Directive Receive (1Ah): Supported 00:09:30.565 Virtualization Management (1Ch): Supported 00:09:30.565 Doorbell Buffer Config (7Ch): Supported 00:09:30.565 Format NVM (80h): Supported LBA-Change 00:09:30.565 I/O Commands 00:09:30.565 ------------ 00:09:30.565 Flush (00h): Supported LBA-Change 00:09:30.565 Write (01h): Supported LBA-Change 00:09:30.565 Read (02h): Supported 00:09:30.565 Compare (05h): Supported 00:09:30.565 Write Zeroes (08h): Supported LBA-Change 00:09:30.565 Dataset Management (09h): Supported LBA-Change 00:09:30.565 Unknown (0Ch): Supported 00:09:30.565 Unknown (12h): Supported 00:09:30.565 Copy (19h): Supported LBA-Change 00:09:30.565 Unknown (1Dh): Supported LBA-Change 00:09:30.565 00:09:30.565 Error Log 00:09:30.565 ========= 00:09:30.565 00:09:30.565 Arbitration 00:09:30.565 =========== 00:09:30.565 Arbitration Burst: no limit 00:09:30.565 00:09:30.565 Power Management 00:09:30.565 ================ 00:09:30.565 Number of Power States: 1 00:09:30.565 Current Power State: Power State #0 00:09:30.565 Power State #0: 00:09:30.565 Max Power: 25.00 W 00:09:30.565 Non-Operational State: Operational 00:09:30.565 Entry Latency: 16 microseconds 00:09:30.565 Exit Latency: 4 microseconds 00:09:30.565 Relative Read Throughput: 0 00:09:30.565 Relative Read Latency: 0 00:09:30.565 Relative Write Throughput: 0 00:09:30.565 Relative Write Latency: 0 00:09:30.565 Idle Power: Not Reported 00:09:30.565 Active Power: Not Reported 00:09:30.565 Non-Operational Permissive Mode: Not Supported 
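[Editor's note] Each LBA format in these tables pairs a data size with a metadata size, and a namespace exposes one of them as its current format: #04 (4096 + 0) on the 12341/12342 controllers versus #07 (4096 + 64, "Metadata Transferred as: Separate Metadata Buffer") on the 12340 controller earlier. A sketch of querying that layout per namespace with SPDK's accessors, assuming an attached ns handle; for formats with zero metadata bytes the buffer distinction is moot:

#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Report the active LBA format's data/metadata split and whether the
 * metadata rides inline with the data (extended LBA) or travels in a
 * separate buffer, as the 12340 dump above reports. */
static void
print_md_layout(struct spdk_nvme_ns *ns)
{
	printf("ns%" PRIu32 ": %" PRIu32 " B data + %" PRIu32 " B metadata per LBA (%s)\n",
	       spdk_nvme_ns_get_id(ns),
	       spdk_nvme_ns_get_sector_size(ns),
	       spdk_nvme_ns_get_md_size(ns),
	       spdk_nvme_ns_supports_extended_lba(ns) ?
	       "interleaved, extended LBA" : "separate metadata buffer");
}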
00:09:30.565 00:09:30.565 Health Information 00:09:30.565 ================== 00:09:30.565 Critical Warnings: 00:09:30.565 Available Spare Space: OK 00:09:30.565 Temperature: OK 00:09:30.565 Device Reliability: OK 00:09:30.565 Read Only: No 00:09:30.565 Volatile Memory Backup: OK 00:09:30.565 Current Temperature: 323 Kelvin (50 Celsius) 00:09:30.565 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:30.565 Available Spare: 0% 00:09:30.565 Available Spare Threshold: 0% 00:09:30.565 Life Percentage Used: 0% 00:09:30.565 Data Units Read: 2333 00:09:30.565 Data Units Written: 2120 00:09:30.565 Host Read Commands: 107927 00:09:30.565 Host Write Commands: 106196 00:09:30.565 Controller Busy Time: 0 minutes 00:09:30.565 Power Cycles: 0 00:09:30.565 Power On Hours: 0 hours 00:09:30.565 Unsafe Shutdowns: 0 00:09:30.565 Unrecoverable Media Errors: 0 00:09:30.565 Lifetime Error Log Entries: 0 00:09:30.565 Warning Temperature Time: 0 minutes 00:09:30.565 Critical Temperature Time: 0 minutes 00:09:30.565 00:09:30.565 Number of Queues 00:09:30.565 ================ 00:09:30.565 Number of I/O Submission Queues: 64 00:09:30.565 Number of I/O Completion Queues: 64 00:09:30.565 00:09:30.565 ZNS Specific Controller Data 00:09:30.565 ============================ 00:09:30.565 Zone Append Size Limit: 0 00:09:30.565 00:09:30.565 00:09:30.565 Active Namespaces 00:09:30.565 ================= 00:09:30.565 Namespace ID:1 00:09:30.565 Error Recovery Timeout: Unlimited 00:09:30.565 Command Set Identifier: NVM (00h) 00:09:30.565 Deallocate: Supported 00:09:30.565 Deallocated/Unwritten Error: Supported 00:09:30.565 Deallocated Read Value: All 0x00 00:09:30.565 Deallocate in Write Zeroes: Not Supported 00:09:30.565 Deallocated Guard Field: 0xFFFF 00:09:30.565 Flush: Supported 00:09:30.565 Reservation: Not Supported 00:09:30.565 Namespace Sharing Capabilities: Private 00:09:30.565 Size (in LBAs): 1048576 (4GiB) 00:09:30.565 Capacity (in LBAs): 1048576 (4GiB) 00:09:30.565 Utilization (in LBAs): 1048576 (4GiB) 00:09:30.565 Thin Provisioning: Not Supported 00:09:30.565 Per-NS Atomic Units: No 00:09:30.565 Maximum Single Source Range Length: 128 00:09:30.565 Maximum Copy Length: 128 00:09:30.565 Maximum Source Range Count: 128 00:09:30.565 NGUID/EUI64 Never Reused: No 00:09:30.565 Namespace Write Protected: No 00:09:30.565 Number of LBA Formats: 8 00:09:30.565 Current LBA Format: LBA Format #04 00:09:30.565 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:30.565 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:30.565 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:30.565 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:30.565 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:30.565 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:30.565 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:30.565 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:30.565 00:09:30.565 NVM Specific Namespace Data 00:09:30.565 =========================== 00:09:30.565 Logical Block Storage Tag Mask: 0 00:09:30.565 Protection Information Capabilities: 00:09:30.565 16b Guard Protection Information Storage Tag Support: No 00:09:30.565 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:30.565 Storage Tag Check Read Support: No 00:09:30.565 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.565 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.565 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.565 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.565 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.565 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.565 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.565 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.565 Namespace ID:2 00:09:30.565 Error Recovery Timeout: Unlimited 00:09:30.565 Command Set Identifier: NVM (00h) 00:09:30.565 Deallocate: Supported 00:09:30.565 Deallocated/Unwritten Error: Supported 00:09:30.565 Deallocated Read Value: All 0x00 00:09:30.565 Deallocate in Write Zeroes: Not Supported 00:09:30.565 Deallocated Guard Field: 0xFFFF 00:09:30.565 Flush: Supported 00:09:30.565 Reservation: Not Supported 00:09:30.565 Namespace Sharing Capabilities: Private 00:09:30.565 Size (in LBAs): 1048576 (4GiB) 00:09:30.565 Capacity (in LBAs): 1048576 (4GiB) 00:09:30.565 Utilization (in LBAs): 1048576 (4GiB) 00:09:30.565 Thin Provisioning: Not Supported 00:09:30.565 Per-NS Atomic Units: No 00:09:30.565 Maximum Single Source Range Length: 128 00:09:30.565 Maximum Copy Length: 128 00:09:30.565 Maximum Source Range Count: 128 00:09:30.565 NGUID/EUI64 Never Reused: No 00:09:30.565 Namespace Write Protected: No 00:09:30.565 Number of LBA Formats: 8 00:09:30.565 Current LBA Format: LBA Format #04 00:09:30.565 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:30.565 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:30.565 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:30.565 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:30.565 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:30.565 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:30.565 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:30.565 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:30.565 00:09:30.565 NVM Specific Namespace Data 00:09:30.565 =========================== 00:09:30.565 Logical Block Storage Tag Mask: 0 00:09:30.565 Protection Information Capabilities: 00:09:30.565 16b Guard Protection Information Storage Tag Support: No 00:09:30.565 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:30.565 Storage Tag Check Read Support: No 00:09:30.565 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.565 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.566 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.566 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.566 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.566 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.566 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.566 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.566 Namespace ID:3 00:09:30.566 Error Recovery Timeout: Unlimited 00:09:30.566 Command Set Identifier: NVM (00h) 00:09:30.566 Deallocate: Supported 00:09:30.566 Deallocated/Unwritten Error: Supported 00:09:30.566 Deallocated Read 
Value: All 0x00 00:09:30.566 Deallocate in Write Zeroes: Not Supported 00:09:30.566 Deallocated Guard Field: 0xFFFF 00:09:30.566 Flush: Supported 00:09:30.566 Reservation: Not Supported 00:09:30.566 Namespace Sharing Capabilities: Private 00:09:30.566 Size (in LBAs): 1048576 (4GiB) 00:09:30.566 Capacity (in LBAs): 1048576 (4GiB) 00:09:30.566 Utilization (in LBAs): 1048576 (4GiB) 00:09:30.566 Thin Provisioning: Not Supported 00:09:30.566 Per-NS Atomic Units: No 00:09:30.566 Maximum Single Source Range Length: 128 00:09:30.566 Maximum Copy Length: 128 00:09:30.566 Maximum Source Range Count: 128 00:09:30.566 NGUID/EUI64 Never Reused: No 00:09:30.566 Namespace Write Protected: No 00:09:30.566 Number of LBA Formats: 8 00:09:30.566 Current LBA Format: LBA Format #04 00:09:30.566 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:30.566 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:30.566 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:30.566 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:30.566 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:30.566 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:30.566 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:30.566 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:30.566 00:09:30.566 NVM Specific Namespace Data 00:09:30.566 =========================== 00:09:30.566 Logical Block Storage Tag Mask: 0 00:09:30.566 Protection Information Capabilities: 00:09:30.566 16b Guard Protection Information Storage Tag Support: No 00:09:30.566 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:30.566 Storage Tag Check Read Support: No 00:09:30.566 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.566 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.566 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.566 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.566 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.566 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.566 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.566 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.566 15:02:31 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:30.566 15:02:31 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:30.825 ===================================================== 00:09:30.825 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:30.825 ===================================================== 00:09:30.825 Controller Capabilities/Features 00:09:30.825 ================================ 00:09:30.825 Vendor ID: 1b36 00:09:30.825 Subsystem Vendor ID: 1af4 00:09:30.825 Serial Number: 12343 00:09:30.825 Model Number: QEMU NVMe Ctrl 00:09:30.825 Firmware Version: 8.0.0 00:09:30.825 Recommended Arb Burst: 6 00:09:30.825 IEEE OUI Identifier: 00 54 52 00:09:30.825 Multi-path I/O 00:09:30.825 May have multiple subsystem ports: No 00:09:30.825 May have multiple controllers: Yes 00:09:30.825 Associated with SR-IOV VF: No 00:09:30.825 Max Data Transfer Size: 524288 00:09:30.825 Max Number of Namespaces: 
256 00:09:30.825 Max Number of I/O Queues: 64 00:09:30.825 NVMe Specification Version (VS): 1.4 00:09:30.825 NVMe Specification Version (Identify): 1.4 00:09:30.825 Maximum Queue Entries: 2048 00:09:30.825 Contiguous Queues Required: Yes 00:09:30.825 Arbitration Mechanisms Supported 00:09:30.825 Weighted Round Robin: Not Supported 00:09:30.825 Vendor Specific: Not Supported 00:09:30.825 Reset Timeout: 7500 ms 00:09:30.825 Doorbell Stride: 4 bytes 00:09:30.825 NVM Subsystem Reset: Not Supported 00:09:30.825 Command Sets Supported 00:09:30.825 NVM Command Set: Supported 00:09:30.825 Boot Partition: Not Supported 00:09:30.825 Memory Page Size Minimum: 4096 bytes 00:09:30.825 Memory Page Size Maximum: 65536 bytes 00:09:30.825 Persistent Memory Region: Not Supported 00:09:30.825 Optional Asynchronous Events Supported 00:09:30.825 Namespace Attribute Notices: Supported 00:09:30.825 Firmware Activation Notices: Not Supported 00:09:30.825 ANA Change Notices: Not Supported 00:09:30.825 PLE Aggregate Log Change Notices: Not Supported 00:09:30.825 LBA Status Info Alert Notices: Not Supported 00:09:30.825 EGE Aggregate Log Change Notices: Not Supported 00:09:30.825 Normal NVM Subsystem Shutdown event: Not Supported 00:09:30.825 Zone Descriptor Change Notices: Not Supported 00:09:30.825 Discovery Log Change Notices: Not Supported 00:09:30.825 Controller Attributes 00:09:30.825 128-bit Host Identifier: Not Supported 00:09:30.825 Non-Operational Permissive Mode: Not Supported 00:09:30.825 NVM Sets: Not Supported 00:09:30.825 Read Recovery Levels: Not Supported 00:09:30.825 Endurance Groups: Supported 00:09:30.825 Predictable Latency Mode: Not Supported 00:09:30.825 Traffic Based Keep Alive: Not Supported 00:09:30.825 Namespace Granularity: Not Supported 00:09:30.825 SQ Associations: Not Supported 00:09:30.825 UUID List: Not Supported 00:09:30.825 Multi-Domain Subsystem: Not Supported 00:09:30.825 Fixed Capacity Management: Not Supported 00:09:30.825 Variable Capacity Management: Not Supported 00:09:30.825 Delete Endurance Group: Not Supported 00:09:30.825 Delete NVM Set: Not Supported 00:09:30.825 Extended LBA Formats Supported: Supported 00:09:30.825 Flexible Data Placement Supported: Supported 00:09:30.825 00:09:30.825 Controller Memory Buffer Support 00:09:30.825 ================================ 00:09:30.825 Supported: No 00:09:30.825 00:09:30.825 Persistent Memory Region Support 00:09:30.825 ================================ 00:09:30.825 Supported: No 00:09:30.825 00:09:30.825 Admin Command Set Attributes 00:09:30.825 ============================ 00:09:30.825 Security Send/Receive: Not Supported 00:09:30.825 Format NVM: Supported 00:09:30.825 Firmware Activate/Download: Not Supported 00:09:30.825 Namespace Management: Supported 00:09:30.825 Device Self-Test: Not Supported 00:09:30.825 Directives: Supported 00:09:30.825 NVMe-MI: Not Supported 00:09:30.825 Virtualization Management: Not Supported 00:09:30.825 Doorbell Buffer Config: Supported 00:09:30.825 Get LBA Status Capability: Not Supported 00:09:30.826 Command & Feature Lockdown Capability: Not Supported 00:09:30.826 Abort Command Limit: 4 00:09:30.826 Async Event Request Limit: 4 00:09:30.826 Number of Firmware Slots: N/A 00:09:30.826 Firmware Slot 1 Read-Only: N/A 00:09:30.826 Firmware Activation Without Reset: N/A 00:09:30.826 Multiple Update Detection Support: N/A 00:09:30.826 Firmware Update Granularity: No Information Provided 00:09:30.826 Per-Namespace SMART Log: Yes 00:09:30.826 Asymmetric Namespace Access Log Page: Not Supported
00:09:30.826 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:30.826 Command Effects Log Page: Supported 00:09:30.826 Get Log Page Extended Data: Supported 00:09:30.826 Telemetry Log Pages: Not Supported 00:09:30.826 Persistent Event Log Pages: Not Supported 00:09:30.826 Supported Log Pages Log Page: May Support 00:09:30.826 Commands Supported & Effects Log Page: Not Supported 00:09:30.826 Feature Identifiers & Effects Log Page: May Support 00:09:30.826 NVMe-MI Commands & Effects Log Page: May Support 00:09:30.826 Data Area 4 for Telemetry Log: Not Supported 00:09:30.826 Error Log Page Entries Supported: 1 00:09:30.826 Keep Alive: Not Supported 00:09:30.826 00:09:30.826 NVM Command Set Attributes 00:09:30.826 ========================== 00:09:30.826 Submission Queue Entry Size 00:09:30.826 Max: 64 00:09:30.826 Min: 64 00:09:30.826 Completion Queue Entry Size 00:09:30.826 Max: 16 00:09:30.826 Min: 16 00:09:30.826 Number of Namespaces: 256 00:09:30.826 Compare Command: Supported 00:09:30.826 Write Uncorrectable Command: Not Supported 00:09:30.826 Dataset Management Command: Supported 00:09:30.826 Write Zeroes Command: Supported 00:09:30.826 Set Features Save Field: Supported 00:09:30.826 Reservations: Not Supported 00:09:30.826 Timestamp: Supported 00:09:30.826 Copy: Supported 00:09:30.826 Volatile Write Cache: Present 00:09:30.826 Atomic Write Unit (Normal): 1 00:09:30.826 Atomic Write Unit (PFail): 1 00:09:30.826 Atomic Compare & Write Unit: 1 00:09:30.826 Fused Compare & Write: Not Supported 00:09:30.826 Scatter-Gather List 00:09:30.826 SGL Command Set: Supported 00:09:30.826 SGL Keyed: Not Supported 00:09:30.826 SGL Bit Bucket Descriptor: Not Supported 00:09:30.826 SGL Metadata Pointer: Not Supported 00:09:30.826 Oversized SGL: Not Supported 00:09:30.826 SGL Metadata Address: Not Supported 00:09:30.826 SGL Offset: Not Supported 00:09:30.826 Transport SGL Data Block: Not Supported 00:09:30.826 Replay Protected Memory Block: Not Supported 00:09:30.826 00:09:30.826 Firmware Slot Information 00:09:30.826 ========================= 00:09:30.826 Active slot: 1 00:09:30.826 Slot 1 Firmware Revision: 1.0 00:09:30.826 00:09:30.826 00:09:30.826 Commands Supported and Effects 00:09:30.826 ============================== 00:09:30.826 Admin Commands 00:09:30.826 -------------- 00:09:30.826 Delete I/O Submission Queue (00h): Supported 00:09:30.826 Create I/O Submission Queue (01h): Supported 00:09:30.826 Get Log Page (02h): Supported 00:09:30.826 Delete I/O Completion Queue (04h): Supported 00:09:30.826 Create I/O Completion Queue (05h): Supported 00:09:30.826 Identify (06h): Supported 00:09:30.826 Abort (08h): Supported 00:09:30.826 Set Features (09h): Supported 00:09:30.826 Get Features (0Ah): Supported 00:09:30.826 Asynchronous Event Request (0Ch): Supported 00:09:30.826 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:30.826 Directive Send (19h): Supported 00:09:30.826 Directive Receive (1Ah): Supported 00:09:30.826 Virtualization Management (1Ch): Supported 00:09:30.826 Doorbell Buffer Config (7Ch): Supported 00:09:30.826 Format NVM (80h): Supported LBA-Change 00:09:30.826 I/O Commands 00:09:30.826 ------------ 00:09:30.826 Flush (00h): Supported LBA-Change 00:09:30.826 Write (01h): Supported LBA-Change 00:09:30.826 Read (02h): Supported 00:09:30.826 Compare (05h): Supported 00:09:30.826 Write Zeroes (08h): Supported LBA-Change 00:09:30.826 Dataset Management (09h): Supported LBA-Change 00:09:30.826 Unknown (0Ch): Supported 00:09:30.826 Unknown (12h): Supported 00:09:30.826 Copy
(19h): Supported LBA-Change 00:09:30.826 Unknown (1Dh): Supported LBA-Change 00:09:30.826 00:09:30.826 Error Log 00:09:30.826 ========= 00:09:30.826 00:09:30.826 Arbitration 00:09:30.826 =========== 00:09:30.826 Arbitration Burst: no limit 00:09:30.826 00:09:30.826 Power Management 00:09:30.826 ================ 00:09:30.826 Number of Power States: 1 00:09:30.826 Current Power State: Power State #0 00:09:30.826 Power State #0: 00:09:30.826 Max Power: 25.00 W 00:09:30.826 Non-Operational State: Operational 00:09:30.826 Entry Latency: 16 microseconds 00:09:30.826 Exit Latency: 4 microseconds 00:09:30.826 Relative Read Throughput: 0 00:09:30.826 Relative Read Latency: 0 00:09:30.826 Relative Write Throughput: 0 00:09:30.826 Relative Write Latency: 0 00:09:30.826 Idle Power: Not Reported 00:09:30.826 Active Power: Not Reported 00:09:30.826 Non-Operational Permissive Mode: Not Supported 00:09:30.826 00:09:30.826 Health Information 00:09:30.826 ================== 00:09:30.826 Critical Warnings: 00:09:30.826 Available Spare Space: OK 00:09:30.826 Temperature: OK 00:09:30.826 Device Reliability: OK 00:09:30.826 Read Only: No 00:09:30.826 Volatile Memory Backup: OK 00:09:30.826 Current Temperature: 323 Kelvin (50 Celsius) 00:09:30.826 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:30.826 Available Spare: 0% 00:09:30.826 Available Spare Threshold: 0% 00:09:30.826 Life Percentage Used: 0% 00:09:30.826 Data Units Read: 841 00:09:30.826 Data Units Written: 770 00:09:30.826 Host Read Commands: 36471 00:09:30.826 Host Write Commands: 35894 00:09:30.826 Controller Busy Time: 0 minutes 00:09:30.826 Power Cycles: 0 00:09:30.826 Power On Hours: 0 hours 00:09:30.826 Unsafe Shutdowns: 0 00:09:30.826 Unrecoverable Media Errors: 0 00:09:30.826 Lifetime Error Log Entries: 0 00:09:30.826 Warning Temperature Time: 0 minutes 00:09:30.826 Critical Temperature Time: 0 minutes 00:09:30.826 00:09:30.826 Number of Queues 00:09:30.826 ================ 00:09:30.826 Number of I/O Submission Queues: 64 00:09:30.826 Number of I/O Completion Queues: 64 00:09:30.826 00:09:30.826 ZNS Specific Controller Data 00:09:30.826 ============================ 00:09:30.826 Zone Append Size Limit: 0 00:09:30.826 00:09:30.826 00:09:30.826 Active Namespaces 00:09:30.826 ================= 00:09:30.826 Namespace ID:1 00:09:30.826 Error Recovery Timeout: Unlimited 00:09:30.826 Command Set Identifier: NVM (00h) 00:09:30.826 Deallocate: Supported 00:09:30.826 Deallocated/Unwritten Error: Supported 00:09:30.826 Deallocated Read Value: All 0x00 00:09:30.826 Deallocate in Write Zeroes: Not Supported 00:09:30.826 Deallocated Guard Field: 0xFFFF 00:09:30.826 Flush: Supported 00:09:30.826 Reservation: Not Supported 00:09:30.826 Namespace Sharing Capabilities: Multiple Controllers 00:09:30.826 Size (in LBAs): 262144 (1GiB) 00:09:30.826 Capacity (in LBAs): 262144 (1GiB) 00:09:30.826 Utilization (in LBAs): 262144 (1GiB) 00:09:30.826 Thin Provisioning: Not Supported 00:09:30.826 Per-NS Atomic Units: No 00:09:30.826 Maximum Single Source Range Length: 128 00:09:30.826 Maximum Copy Length: 128 00:09:30.826 Maximum Source Range Count: 128 00:09:30.826 NGUID/EUI64 Never Reused: No 00:09:30.826 Namespace Write Protected: No 00:09:30.826 Endurance group ID: 1 00:09:30.826 Number of LBA Formats: 8 00:09:30.826 Current LBA Format: LBA Format #04 00:09:30.826 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:30.826 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:30.826 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:30.826 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:09:30.826 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:30.826 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:30.826 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:30.826 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:30.826 00:09:30.826 Get Feature FDP: 00:09:30.826 ================ 00:09:30.826 Enabled: Yes 00:09:30.826 FDP configuration index: 0 00:09:30.826 00:09:30.826 FDP configurations log page 00:09:30.826 =========================== 00:09:30.827 Number of FDP configurations: 1 00:09:30.827 Version: 0 00:09:30.827 Size: 112 00:09:30.827 FDP Configuration Descriptor: 0 00:09:30.827 Descriptor Size: 96 00:09:30.827 Reclaim Group Identifier format: 2 00:09:30.827 FDP Volatile Write Cache: Not Present 00:09:30.827 FDP Configuration: Valid 00:09:30.827 Vendor Specific Size: 0 00:09:30.827 Number of Reclaim Groups: 2 00:09:30.827 Number of Reclaim Unit Handles: 8 00:09:30.827 Max Placement Identifiers: 128 00:09:30.827 Number of Namespaces Supported: 256 00:09:30.827 Reclaim Unit Nominal Size: 6000000 bytes 00:09:30.827 Estimated Reclaim Unit Time Limit: Not Reported 00:09:30.827 RUH Desc #000: RUH Type: Initially Isolated 00:09:30.827 RUH Desc #001: RUH Type: Initially Isolated 00:09:30.827 RUH Desc #002: RUH Type: Initially Isolated 00:09:30.827 RUH Desc #003: RUH Type: Initially Isolated 00:09:30.827 RUH Desc #004: RUH Type: Initially Isolated 00:09:30.827 RUH Desc #005: RUH Type: Initially Isolated 00:09:30.827 RUH Desc #006: RUH Type: Initially Isolated 00:09:30.827 RUH Desc #007: RUH Type: Initially Isolated 00:09:30.827 00:09:30.827 FDP reclaim unit handle usage log page 00:09:30.827 ====================================== 00:09:30.827 Number of Reclaim Unit Handles: 8 00:09:30.827 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:30.827 RUH Usage Desc #001: RUH Attributes: Unused 00:09:30.827 RUH Usage Desc #002: RUH Attributes: Unused 00:09:30.827 RUH Usage Desc #003: RUH Attributes: Unused 00:09:30.827 RUH Usage Desc #004: RUH Attributes: Unused 00:09:30.827 RUH Usage Desc #005: RUH Attributes: Unused 00:09:30.827 RUH Usage Desc #006: RUH Attributes: Unused 00:09:30.827 RUH Usage Desc #007: RUH Attributes: Unused 00:09:30.827 00:09:30.827 FDP statistics log page 00:09:30.827 ======================= 00:09:30.827 Host bytes with metadata written: 489398272 00:09:30.827 Media bytes with metadata written: 489451520 00:09:30.827 Media bytes erased: 0 00:09:30.827 00:09:30.827 FDP events log page 00:09:30.827 =================== 00:09:30.827 Number of FDP events: 0 00:09:30.827 00:09:30.827 NVM Specific Namespace Data 00:09:30.827 =========================== 00:09:30.827 Logical Block Storage Tag Mask: 0 00:09:30.827 Protection Information Capabilities: 00:09:30.827 16b Guard Protection Information Storage Tag Support: No 00:09:30.827 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:30.827 Storage Tag Check Read Support: No 00:09:30.827 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.827 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.827 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.827 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.827 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.827 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.827 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.827 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:30.827 00:09:30.827 real 0m2.077s 00:09:30.827 user 0m0.760s 00:09:30.827 sys 0m1.054s 00:09:30.827 15:02:31 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.827 15:02:31 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:30.827 ************************************ 00:09:30.827 END TEST nvme_identify 00:09:30.827 ************************************ 00:09:31.086 15:02:31 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:31.086 15:02:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.086 15:02:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.086 15:02:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.086 ************************************ 00:09:31.086 START TEST nvme_perf 00:09:31.086 ************************************ 00:09:31.086 15:02:31 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:09:31.086 15:02:31 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:32.512 Initializing NVMe Controllers 00:09:32.512 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:32.512 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:32.512 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:32.512 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:32.513 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:32.513 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:32.513 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:32.513 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:32.513 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:32.513 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:32.513 Initialization complete. Launching workers. 
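[Editor's note] The identify dump above and the perf run whose results follow come from two standalone SPDK binaries, so they can be repeated outside the test harness. Below is a minimal bash sketch, assuming the build tree at /home/vagrant/spdk_repo/spdk (the path this job uses) and the same QEMU-emulated controller at PCIe address 0000:00:13.0; the flag annotations are our reading of this run and the tools' usage text, not a definitive reference.

#!/usr/bin/env bash
# Sketch: re-run the nvme_identify and nvme_perf steps from this log by hand.
# Paths, PCIe addresses, and flags are copied from the log; adjust as needed.
set -euo pipefail

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin

# Controller/namespace dump for one controller, addressed by transport type
# and PCIe address; -i 0 selects shared-memory group id 0 as in the harness.
"$SPDK_BIN/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

# Read-latency run across all attached controllers:
#   -q 128    queue depth
#   -w read   sequential read workload
#   -o 12288  I/O size in bytes (12 KiB)
#   -t 1      run time in seconds
#   -LL       software latency tracking; giving -L twice appears to be what
#             produces the per-bucket histograms below, -L alone the summary
#   -N        passed through unchanged from the harness invocation
"$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

Both tools drive the controllers from userspace, so the devices must first be unbound from the kernel nvme driver (the harness handles this via SPDK's scripts/setup.sh) and the commands run with root privileges.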
00:09:32.513 ======================================================== 00:09:32.513 Latency(us) 00:09:32.513 Device Information : IOPS MiB/s Average min max 00:09:32.513 PCIE (0000:00:10.0) NSID 1 from core 0: 7058.70 82.72 18209.30 9140.40 54730.18 00:09:32.513 PCIE (0000:00:11.0) NSID 1 from core 0: 7058.70 82.72 18179.21 9178.54 52222.76 00:09:32.513 PCIE (0000:00:13.0) NSID 1 from core 0: 7058.70 82.72 18145.08 8937.99 50568.09 00:09:32.513 PCIE (0000:00:12.0) NSID 1 from core 0: 7058.70 82.72 18112.34 8795.05 48187.36 00:09:32.513 PCIE (0000:00:12.0) NSID 2 from core 0: 7058.70 82.72 18078.67 8839.83 45717.57 00:09:32.513 PCIE (0000:00:12.0) NSID 3 from core 0: 7058.70 82.72 18040.70 8857.74 42945.69 00:09:32.513 ======================================================== 00:09:32.513 Total : 42352.18 496.31 18127.55 8795.05 54730.18 00:09:32.513 00:09:32.513 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:32.513 ================================================================================= 00:09:32.513 1.00000% : 9580.363us 00:09:32.513 10.00000% : 11159.544us 00:09:32.513 25.00000% : 13686.233us 00:09:32.513 50.00000% : 17792.103us 00:09:32.513 75.00000% : 21476.858us 00:09:32.513 90.00000% : 25161.613us 00:09:32.513 95.00000% : 27583.023us 00:09:32.513 98.00000% : 30530.827us 00:09:32.513 99.00000% : 40216.469us 00:09:32.513 99.50000% : 53271.030us 00:09:32.513 99.90000% : 54744.932us 00:09:32.513 99.99000% : 54744.932us 00:09:32.513 99.99900% : 54744.932us 00:09:32.513 99.99990% : 54744.932us 00:09:32.513 99.99999% : 54744.932us 00:09:32.513 00:09:32.513 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:32.513 ================================================================================= 00:09:32.513 1.00000% : 9685.642us 00:09:32.513 10.00000% : 11212.183us 00:09:32.513 25.00000% : 13580.954us 00:09:32.513 50.00000% : 17897.382us 00:09:32.513 75.00000% : 21476.858us 00:09:32.513 90.00000% : 25372.170us 00:09:32.513 95.00000% : 27372.466us 00:09:32.513 98.00000% : 29899.155us 00:09:32.513 99.00000% : 38742.567us 00:09:32.513 99.50000% : 50954.898us 00:09:32.513 99.90000% : 52007.685us 00:09:32.513 99.99000% : 52428.800us 00:09:32.513 99.99900% : 52428.800us 00:09:32.513 99.99990% : 52428.800us 00:09:32.513 99.99999% : 52428.800us 00:09:32.513 00:09:32.513 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:32.513 ================================================================================= 00:09:32.513 1.00000% : 9475.084us 00:09:32.513 10.00000% : 11580.659us 00:09:32.513 25.00000% : 13896.790us 00:09:32.513 50.00000% : 17581.545us 00:09:32.513 75.00000% : 21476.858us 00:09:32.513 90.00000% : 24845.777us 00:09:32.513 95.00000% : 27372.466us 00:09:32.513 98.00000% : 29688.598us 00:09:32.513 99.00000% : 38532.010us 00:09:32.513 99.50000% : 49270.439us 00:09:32.513 99.90000% : 50323.226us 00:09:32.513 99.99000% : 50744.341us 00:09:32.513 99.99900% : 50744.341us 00:09:32.513 99.99990% : 50744.341us 00:09:32.513 99.99999% : 50744.341us 00:09:32.513 00:09:32.513 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:32.513 ================================================================================= 00:09:32.513 1.00000% : 9422.445us 00:09:32.513 10.00000% : 11633.298us 00:09:32.513 25.00000% : 13896.790us 00:09:32.513 50.00000% : 17581.545us 00:09:32.513 75.00000% : 21687.415us 00:09:32.513 90.00000% : 24845.777us 00:09:32.513 95.00000% : 27583.023us 00:09:32.513 98.00000% : 29688.598us 
00:09:32.513 99.00000% : 36847.550us 00:09:32.513 99.50000% : 46743.749us 00:09:32.513 99.90000% : 48007.094us 00:09:32.513 99.99000% : 48217.651us 00:09:32.513 99.99900% : 48217.651us 00:09:32.513 99.99990% : 48217.651us 00:09:32.513 99.99999% : 48217.651us 00:09:32.513 00:09:32.513 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:32.513 ================================================================================= 00:09:32.513 1.00000% : 9369.806us 00:09:32.513 10.00000% : 11580.659us 00:09:32.513 25.00000% : 13896.790us 00:09:32.513 50.00000% : 17476.267us 00:09:32.513 75.00000% : 21687.415us 00:09:32.513 90.00000% : 25161.613us 00:09:32.513 95.00000% : 27583.023us 00:09:32.513 98.00000% : 29688.598us 00:09:32.513 99.00000% : 34741.976us 00:09:32.513 99.50000% : 44427.618us 00:09:32.513 99.90000% : 45480.405us 00:09:32.513 99.99000% : 45901.520us 00:09:32.513 99.99900% : 45901.520us 00:09:32.513 99.99990% : 45901.520us 00:09:32.513 99.99999% : 45901.520us 00:09:32.513 00:09:32.513 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:32.513 ================================================================================= 00:09:32.513 1.00000% : 9475.084us 00:09:32.513 10.00000% : 11422.741us 00:09:32.513 25.00000% : 13791.512us 00:09:32.513 50.00000% : 17581.545us 00:09:32.513 75.00000% : 21582.137us 00:09:32.513 90.00000% : 25266.892us 00:09:32.513 95.00000% : 27583.023us 00:09:32.513 98.00000% : 30741.385us 00:09:32.513 99.00000% : 33057.516us 00:09:32.513 99.50000% : 41479.814us 00:09:32.513 99.90000% : 42743.158us 00:09:32.513 99.99000% : 42953.716us 00:09:32.513 99.99900% : 42953.716us 00:09:32.513 99.99990% : 42953.716us 00:09:32.513 99.99999% : 42953.716us 00:09:32.513 00:09:32.513 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:32.513 ============================================================================== 00:09:32.513 Range in us Cumulative IO count 00:09:32.513 9106.609 - 9159.248: 0.0141% ( 1) 00:09:32.513 9159.248 - 9211.888: 0.1267% ( 8) 00:09:32.513 9211.888 - 9264.527: 0.2393% ( 8) 00:09:32.513 9264.527 - 9317.166: 0.3378% ( 7) 00:09:32.513 9317.166 - 9369.806: 0.4364% ( 7) 00:09:32.513 9369.806 - 9422.445: 0.5771% ( 10) 00:09:32.513 9422.445 - 9475.084: 0.6475% ( 5) 00:09:32.513 9475.084 - 9527.724: 0.8164% ( 12) 00:09:32.513 9527.724 - 9580.363: 1.0135% ( 14) 00:09:32.513 9580.363 - 9633.002: 1.1824% ( 12) 00:09:32.513 9633.002 - 9685.642: 1.3654% ( 13) 00:09:32.513 9685.642 - 9738.281: 1.6047% ( 17) 00:09:32.513 9738.281 - 9790.920: 1.9003% ( 21) 00:09:32.513 9790.920 - 9843.560: 2.1396% ( 17) 00:09:32.513 9843.560 - 9896.199: 2.3789% ( 17) 00:09:32.513 9896.199 - 9948.839: 2.7449% ( 26) 00:09:32.513 9948.839 - 10001.478: 3.0828% ( 24) 00:09:32.513 10001.478 - 10054.117: 3.3643% ( 20) 00:09:32.513 10054.117 - 10106.757: 3.7162% ( 25) 00:09:32.513 10106.757 - 10159.396: 4.0822% ( 26) 00:09:32.513 10159.396 - 10212.035: 4.4060% ( 23) 00:09:32.513 10212.035 - 10264.675: 4.7438% ( 24) 00:09:32.513 10264.675 - 10317.314: 5.1520% ( 29) 00:09:32.513 10317.314 - 10369.953: 5.5462% ( 28) 00:09:32.513 10369.953 - 10422.593: 5.8277% ( 20) 00:09:32.513 10422.593 - 10475.232: 6.2500% ( 30) 00:09:32.513 10475.232 - 10527.871: 6.5878% ( 24) 00:09:32.513 10527.871 - 10580.511: 6.8975% ( 22) 00:09:32.513 10580.511 - 10633.150: 7.1509% ( 18) 00:09:32.513 10633.150 - 10685.790: 7.4184% ( 19) 00:09:32.513 10685.790 - 10738.429: 7.6858% ( 19) 00:09:32.513 10738.429 - 10791.068: 7.9392% ( 18) 00:09:32.513 10791.068 - 
10843.708: 8.2207% ( 20) 00:09:32.513 10843.708 - 10896.347: 8.5023% ( 20) 00:09:32.513 10896.347 - 10948.986: 8.7697% ( 19) 00:09:32.513 10948.986 - 11001.626: 9.1498% ( 27) 00:09:32.513 11001.626 - 11054.265: 9.4735% ( 23) 00:09:32.513 11054.265 - 11106.904: 9.8114% ( 24) 00:09:32.513 11106.904 - 11159.544: 10.1070% ( 21) 00:09:32.513 11159.544 - 11212.183: 10.4026% ( 21) 00:09:32.513 11212.183 - 11264.822: 10.6419% ( 17) 00:09:32.513 11264.822 - 11317.462: 10.8953% ( 18) 00:09:32.513 11317.462 - 11370.101: 11.0642% ( 12) 00:09:32.513 11370.101 - 11422.741: 11.2753% ( 15) 00:09:32.513 11422.741 - 11475.380: 11.5146% ( 17) 00:09:32.513 11475.380 - 11528.019: 11.7399% ( 16) 00:09:32.513 11528.019 - 11580.659: 11.9088% ( 12) 00:09:32.513 11580.659 - 11633.298: 12.1903% ( 20) 00:09:32.513 11633.298 - 11685.937: 12.4155% ( 16) 00:09:32.513 11685.937 - 11738.577: 12.6971% ( 20) 00:09:32.513 11738.577 - 11791.216: 12.9505% ( 18) 00:09:32.513 11791.216 - 11843.855: 13.2601% ( 22) 00:09:32.513 11843.855 - 11896.495: 13.5980% ( 24) 00:09:32.513 11896.495 - 11949.134: 13.9780% ( 27) 00:09:32.513 11949.134 - 12001.773: 14.2736% ( 21) 00:09:32.513 12001.773 - 12054.413: 14.6115% ( 24) 00:09:32.513 12054.413 - 12107.052: 14.9352% ( 23) 00:09:32.513 12107.052 - 12159.692: 15.2309% ( 21) 00:09:32.513 12159.692 - 12212.331: 15.5828% ( 25) 00:09:32.513 12212.331 - 12264.970: 15.8643% ( 20) 00:09:32.513 12264.970 - 12317.610: 16.1740% ( 22) 00:09:32.513 12317.610 - 12370.249: 16.4837% ( 22) 00:09:32.513 12370.249 - 12422.888: 16.8497% ( 26) 00:09:32.513 12422.888 - 12475.528: 17.1030% ( 18) 00:09:32.513 12475.528 - 12528.167: 17.4831% ( 27) 00:09:32.513 12528.167 - 12580.806: 17.8209% ( 24) 00:09:32.513 12580.806 - 12633.446: 18.2432% ( 30) 00:09:32.513 12633.446 - 12686.085: 18.6796% ( 31) 00:09:32.513 12686.085 - 12738.724: 19.1019% ( 30) 00:09:32.513 12738.724 - 12791.364: 19.4820% ( 27) 00:09:32.513 12791.364 - 12844.003: 19.9465% ( 33) 00:09:32.514 12844.003 - 12896.643: 20.3266% ( 27) 00:09:32.514 12896.643 - 12949.282: 20.6785% ( 25) 00:09:32.514 12949.282 - 13001.921: 21.0586% ( 27) 00:09:32.514 13001.921 - 13054.561: 21.3682% ( 22) 00:09:32.514 13054.561 - 13107.200: 21.7202% ( 25) 00:09:32.514 13107.200 - 13159.839: 22.0580% ( 24) 00:09:32.514 13159.839 - 13212.479: 22.4240% ( 26) 00:09:32.514 13212.479 - 13265.118: 22.7477% ( 23) 00:09:32.514 13265.118 - 13317.757: 23.0715% ( 23) 00:09:32.514 13317.757 - 13370.397: 23.4516% ( 27) 00:09:32.514 13370.397 - 13423.036: 23.7331% ( 20) 00:09:32.514 13423.036 - 13475.676: 24.1273% ( 28) 00:09:32.514 13475.676 - 13580.954: 24.7607% ( 45) 00:09:32.514 13580.954 - 13686.233: 25.4786% ( 51) 00:09:32.514 13686.233 - 13791.512: 26.1684% ( 49) 00:09:32.514 13791.512 - 13896.790: 26.7877% ( 44) 00:09:32.514 13896.790 - 14002.069: 27.6182% ( 59) 00:09:32.514 14002.069 - 14107.348: 28.3784% ( 54) 00:09:32.514 14107.348 - 14212.627: 29.0963% ( 51) 00:09:32.514 14212.627 - 14317.905: 30.0253% ( 66) 00:09:32.514 14317.905 - 14423.184: 30.7995% ( 55) 00:09:32.514 14423.184 - 14528.463: 31.5878% ( 56) 00:09:32.514 14528.463 - 14633.741: 32.3339% ( 53) 00:09:32.514 14633.741 - 14739.020: 33.1644% ( 59) 00:09:32.514 14739.020 - 14844.299: 33.8964% ( 52) 00:09:32.514 14844.299 - 14949.578: 34.5721% ( 48) 00:09:32.514 14949.578 - 15054.856: 35.3322% ( 54) 00:09:32.514 15054.856 - 15160.135: 35.9375% ( 43) 00:09:32.514 15160.135 - 15265.414: 36.6413% ( 50) 00:09:32.514 15265.414 - 15370.692: 37.2748% ( 45) 00:09:32.514 15370.692 - 15475.971: 37.8238% ( 39) 00:09:32.514 
15475.971 - 15581.250: 38.4713% ( 46) 00:09:32.514 15581.250 - 15686.529: 39.0907% ( 44) 00:09:32.514 15686.529 - 15791.807: 39.6959% ( 43) 00:09:32.514 15791.807 - 15897.086: 40.2449% ( 39) 00:09:32.514 15897.086 - 16002.365: 40.7235% ( 34) 00:09:32.514 16002.365 - 16107.643: 41.2444% ( 37) 00:09:32.514 16107.643 - 16212.922: 41.7370% ( 35) 00:09:32.514 16212.922 - 16318.201: 42.1875% ( 32) 00:09:32.514 16318.201 - 16423.480: 42.7083% ( 37) 00:09:32.514 16423.480 - 16528.758: 43.1729% ( 33) 00:09:32.514 16528.758 - 16634.037: 43.7218% ( 39) 00:09:32.514 16634.037 - 16739.316: 44.2145% ( 35) 00:09:32.514 16739.316 - 16844.594: 44.7354% ( 37) 00:09:32.514 16844.594 - 16949.873: 45.2984% ( 40) 00:09:32.514 16949.873 - 17055.152: 45.9600% ( 47) 00:09:32.514 17055.152 - 17160.431: 46.7202% ( 54) 00:09:32.514 17160.431 - 17265.709: 47.3395% ( 44) 00:09:32.514 17265.709 - 17370.988: 48.0011% ( 47) 00:09:32.514 17370.988 - 17476.267: 48.6486% ( 46) 00:09:32.514 17476.267 - 17581.545: 49.2962% ( 46) 00:09:32.514 17581.545 - 17686.824: 49.9155% ( 44) 00:09:32.514 17686.824 - 17792.103: 50.5912% ( 48) 00:09:32.514 17792.103 - 17897.382: 51.2810% ( 49) 00:09:32.514 17897.382 - 18002.660: 51.8440% ( 40) 00:09:32.514 18002.660 - 18107.939: 52.4493% ( 43) 00:09:32.514 18107.939 - 18213.218: 53.0265% ( 41) 00:09:32.514 18213.218 - 18318.496: 53.6740% ( 46) 00:09:32.514 18318.496 - 18423.775: 54.2652% ( 42) 00:09:32.514 18423.775 - 18529.054: 54.8986% ( 45) 00:09:32.514 18529.054 - 18634.333: 55.4054% ( 36) 00:09:32.514 18634.333 - 18739.611: 56.0107% ( 43) 00:09:32.514 18739.611 - 18844.890: 56.6582% ( 46) 00:09:32.514 18844.890 - 18950.169: 57.2917% ( 45) 00:09:32.514 18950.169 - 19055.447: 57.9814% ( 49) 00:09:32.514 19055.447 - 19160.726: 58.8401% ( 61) 00:09:32.514 19160.726 - 19266.005: 59.6565% ( 58) 00:09:32.514 19266.005 - 19371.284: 60.3604% ( 50) 00:09:32.514 19371.284 - 19476.562: 61.0923% ( 52) 00:09:32.514 19476.562 - 19581.841: 61.8525% ( 54) 00:09:32.514 19581.841 - 19687.120: 62.5563% ( 50) 00:09:32.514 19687.120 - 19792.398: 63.1898% ( 45) 00:09:32.514 19792.398 - 19897.677: 63.8795% ( 49) 00:09:32.514 19897.677 - 20002.956: 64.4003% ( 37) 00:09:32.514 20002.956 - 20108.235: 65.0479% ( 46) 00:09:32.514 20108.235 - 20213.513: 65.8502% ( 57) 00:09:32.514 20213.513 - 20318.792: 66.6244% ( 55) 00:09:32.514 20318.792 - 20424.071: 67.3423% ( 51) 00:09:32.514 20424.071 - 20529.349: 68.0743% ( 52) 00:09:32.514 20529.349 - 20634.628: 68.9189% ( 60) 00:09:32.514 20634.628 - 20739.907: 69.6650% ( 53) 00:09:32.514 20739.907 - 20845.186: 70.5096% ( 60) 00:09:32.514 20845.186 - 20950.464: 71.3542% ( 60) 00:09:32.514 20950.464 - 21055.743: 72.2410% ( 63) 00:09:32.514 21055.743 - 21161.022: 72.9870% ( 53) 00:09:32.514 21161.022 - 21266.300: 73.7613% ( 55) 00:09:32.514 21266.300 - 21371.579: 74.6059% ( 60) 00:09:32.514 21371.579 - 21476.858: 75.4223% ( 58) 00:09:32.514 21476.858 - 21582.137: 76.2247% ( 57) 00:09:32.514 21582.137 - 21687.415: 76.8722% ( 46) 00:09:32.514 21687.415 - 21792.694: 77.5901% ( 51) 00:09:32.514 21792.694 - 21897.973: 78.1250% ( 38) 00:09:32.514 21897.973 - 22003.251: 78.7021% ( 41) 00:09:32.514 22003.251 - 22108.530: 79.2089% ( 36) 00:09:32.514 22108.530 - 22213.809: 79.7157% ( 36) 00:09:32.514 22213.809 - 22319.088: 80.2928% ( 41) 00:09:32.514 22319.088 - 22424.366: 80.8418% ( 39) 00:09:32.514 22424.366 - 22529.645: 81.3204% ( 34) 00:09:32.514 22529.645 - 22634.924: 81.7427% ( 30) 00:09:32.514 22634.924 - 22740.202: 82.1650% ( 30) 00:09:32.514 22740.202 - 22845.481: 82.5450% ( 
27) 00:09:32.514 22845.481 - 22950.760: 82.8547% ( 22) 00:09:32.514 22950.760 - 23056.039: 83.3193% ( 33) 00:09:32.514 23056.039 - 23161.317: 83.6712% ( 25) 00:09:32.514 23161.317 - 23266.596: 83.9668% ( 21) 00:09:32.514 23266.596 - 23371.875: 84.3187% ( 25) 00:09:32.514 23371.875 - 23477.153: 84.6706% ( 25) 00:09:32.514 23477.153 - 23582.432: 85.0366% ( 26) 00:09:32.514 23582.432 - 23687.711: 85.3322% ( 21) 00:09:32.514 23687.711 - 23792.990: 85.6841% ( 25) 00:09:32.514 23792.990 - 23898.268: 86.0783% ( 28) 00:09:32.514 23898.268 - 24003.547: 86.4865% ( 29) 00:09:32.514 24003.547 - 24108.826: 86.8243% ( 24) 00:09:32.514 24108.826 - 24214.104: 87.1762% ( 25) 00:09:32.514 24214.104 - 24319.383: 87.5282% ( 25) 00:09:32.514 24319.383 - 24424.662: 87.9082% ( 27) 00:09:32.514 24424.662 - 24529.941: 88.3024% ( 28) 00:09:32.514 24529.941 - 24635.219: 88.5980% ( 21) 00:09:32.514 24635.219 - 24740.498: 88.8936% ( 21) 00:09:32.514 24740.498 - 24845.777: 89.2033% ( 22) 00:09:32.514 24845.777 - 24951.055: 89.4989% ( 21) 00:09:32.514 24951.055 - 25056.334: 89.7804% ( 20) 00:09:32.514 25056.334 - 25161.613: 90.0619% ( 20) 00:09:32.514 25161.613 - 25266.892: 90.3857% ( 23) 00:09:32.514 25266.892 - 25372.170: 90.6672% ( 20) 00:09:32.514 25372.170 - 25477.449: 90.9628% ( 21) 00:09:32.514 25477.449 - 25582.728: 91.2584% ( 21) 00:09:32.514 25582.728 - 25688.006: 91.4977% ( 17) 00:09:32.514 25688.006 - 25793.285: 91.7934% ( 21) 00:09:32.514 25793.285 - 25898.564: 92.0608% ( 19) 00:09:32.514 25898.564 - 26003.843: 92.2438% ( 13) 00:09:32.514 26003.843 - 26109.121: 92.5676% ( 23) 00:09:32.514 26109.121 - 26214.400: 92.7928% ( 16) 00:09:32.514 26214.400 - 26319.679: 92.9758% ( 13) 00:09:32.514 26319.679 - 26424.957: 93.2151% ( 17) 00:09:32.514 26424.957 - 26530.236: 93.4403% ( 16) 00:09:32.514 26530.236 - 26635.515: 93.6515% ( 15) 00:09:32.514 26635.515 - 26740.794: 93.8485% ( 14) 00:09:32.514 26740.794 - 26846.072: 94.0034% ( 11) 00:09:32.514 26846.072 - 26951.351: 94.1864% ( 13) 00:09:32.514 26951.351 - 27161.908: 94.5242% ( 24) 00:09:32.514 27161.908 - 27372.466: 94.8480% ( 23) 00:09:32.514 27372.466 - 27583.023: 95.1014% ( 18) 00:09:32.514 27583.023 - 27793.581: 95.4673% ( 26) 00:09:32.514 27793.581 - 28004.138: 95.7770% ( 22) 00:09:32.514 28004.138 - 28214.696: 96.0726% ( 21) 00:09:32.514 28214.696 - 28425.253: 96.3682% ( 21) 00:09:32.514 28425.253 - 28635.810: 96.6639% ( 21) 00:09:32.514 28635.810 - 28846.368: 96.9313% ( 19) 00:09:32.514 28846.368 - 29056.925: 97.1988% ( 19) 00:09:32.514 29056.925 - 29267.483: 97.4240% ( 16) 00:09:32.514 29267.483 - 29478.040: 97.6211% ( 14) 00:09:32.514 29478.040 - 29688.598: 97.7900% ( 12) 00:09:32.514 29688.598 - 29899.155: 97.8885% ( 7) 00:09:32.514 29899.155 - 30109.712: 97.9307% ( 3) 00:09:32.514 30109.712 - 30320.270: 97.9730% ( 3) 00:09:32.514 30320.270 - 30530.827: 98.0152% ( 3) 00:09:32.514 30530.827 - 30741.385: 98.0574% ( 3) 00:09:32.514 30741.385 - 30951.942: 98.0997% ( 3) 00:09:32.514 30951.942 - 31162.500: 98.1278% ( 2) 00:09:32.514 31162.500 - 31373.057: 98.1700% ( 3) 00:09:32.514 31373.057 - 31583.614: 98.1982% ( 2) 00:09:32.514 37479.222 - 37689.780: 98.2123% ( 1) 00:09:32.514 37689.780 - 37900.337: 98.2967% ( 6) 00:09:32.514 37900.337 - 38110.895: 98.3390% ( 3) 00:09:32.514 38110.895 - 38321.452: 98.4093% ( 5) 00:09:32.514 38321.452 - 38532.010: 98.4797% ( 5) 00:09:32.514 38532.010 - 38742.567: 98.5501% ( 5) 00:09:32.514 38742.567 - 38953.124: 98.6064% ( 4) 00:09:32.514 38953.124 - 39163.682: 98.6768% ( 5) 00:09:32.514 39163.682 - 39374.239: 98.7472% ( 
5) 00:09:32.514 39374.239 - 39584.797: 98.8176% ( 5) 00:09:32.514 39584.797 - 39795.354: 98.8739% ( 4) 00:09:32.514 39795.354 - 40005.912: 98.9443% ( 5) 00:09:32.514 40005.912 - 40216.469: 99.0006% ( 4) 00:09:32.515 40216.469 - 40427.027: 99.0709% ( 5) 00:09:32.515 40427.027 - 40637.584: 99.0991% ( 2) 00:09:32.515 51586.570 - 51797.128: 99.1132% ( 1) 00:09:32.515 51797.128 - 52007.685: 99.1554% ( 3) 00:09:32.515 52007.685 - 52218.243: 99.2258% ( 5) 00:09:32.515 52218.243 - 52428.800: 99.2962% ( 5) 00:09:32.515 52428.800 - 52639.357: 99.3666% ( 5) 00:09:32.515 52639.357 - 52849.915: 99.4229% ( 4) 00:09:32.515 52849.915 - 53060.472: 99.4792% ( 4) 00:09:32.515 53060.472 - 53271.030: 99.5495% ( 5) 00:09:32.515 53271.030 - 53481.587: 99.6199% ( 5) 00:09:32.515 53481.587 - 53692.145: 99.6762% ( 4) 00:09:32.515 53692.145 - 53902.702: 99.7325% ( 4) 00:09:32.515 53902.702 - 54323.817: 99.8733% ( 10) 00:09:32.515 54323.817 - 54744.932: 100.0000% ( 9) 00:09:32.515 00:09:32.515 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:32.515 ============================================================================== 00:09:32.515 Range in us Cumulative IO count 00:09:32.515 9159.248 - 9211.888: 0.0422% ( 3) 00:09:32.515 9211.888 - 9264.527: 0.0563% ( 1) 00:09:32.515 9264.527 - 9317.166: 0.0985% ( 3) 00:09:32.515 9317.166 - 9369.806: 0.2252% ( 9) 00:09:32.515 9369.806 - 9422.445: 0.3378% ( 8) 00:09:32.515 9422.445 - 9475.084: 0.4505% ( 8) 00:09:32.515 9475.084 - 9527.724: 0.5771% ( 9) 00:09:32.515 9527.724 - 9580.363: 0.7179% ( 10) 00:09:32.515 9580.363 - 9633.002: 0.8587% ( 10) 00:09:32.515 9633.002 - 9685.642: 1.0135% ( 11) 00:09:32.515 9685.642 - 9738.281: 1.1543% ( 10) 00:09:32.515 9738.281 - 9790.920: 1.3091% ( 11) 00:09:32.515 9790.920 - 9843.560: 1.4358% ( 9) 00:09:32.515 9843.560 - 9896.199: 1.6329% ( 14) 00:09:32.515 9896.199 - 9948.839: 1.8300% ( 14) 00:09:32.515 9948.839 - 10001.478: 2.1537% ( 23) 00:09:32.515 10001.478 - 10054.117: 2.4916% ( 24) 00:09:32.515 10054.117 - 10106.757: 2.8435% ( 25) 00:09:32.515 10106.757 - 10159.396: 3.2235% ( 27) 00:09:32.515 10159.396 - 10212.035: 3.6318% ( 29) 00:09:32.515 10212.035 - 10264.675: 4.0541% ( 30) 00:09:32.515 10264.675 - 10317.314: 4.4200% ( 26) 00:09:32.515 10317.314 - 10369.953: 4.6875% ( 19) 00:09:32.515 10369.953 - 10422.593: 5.0253% ( 24) 00:09:32.515 10422.593 - 10475.232: 5.3209% ( 21) 00:09:32.515 10475.232 - 10527.871: 5.6025% ( 20) 00:09:32.515 10527.871 - 10580.511: 5.8699% ( 19) 00:09:32.515 10580.511 - 10633.150: 6.1515% ( 20) 00:09:32.515 10633.150 - 10685.790: 6.4330% ( 20) 00:09:32.515 10685.790 - 10738.429: 6.7145% ( 20) 00:09:32.515 10738.429 - 10791.068: 7.0383% ( 23) 00:09:32.515 10791.068 - 10843.708: 7.3620% ( 23) 00:09:32.515 10843.708 - 10896.347: 7.7421% ( 27) 00:09:32.515 10896.347 - 10948.986: 8.2066% ( 33) 00:09:32.515 10948.986 - 11001.626: 8.5586% ( 25) 00:09:32.515 11001.626 - 11054.265: 8.9949% ( 31) 00:09:32.515 11054.265 - 11106.904: 9.3328% ( 24) 00:09:32.515 11106.904 - 11159.544: 9.6988% ( 26) 00:09:32.515 11159.544 - 11212.183: 10.0648% ( 26) 00:09:32.515 11212.183 - 11264.822: 10.4307% ( 26) 00:09:32.515 11264.822 - 11317.462: 10.8108% ( 27) 00:09:32.515 11317.462 - 11370.101: 11.1909% ( 27) 00:09:32.515 11370.101 - 11422.741: 11.5569% ( 26) 00:09:32.515 11422.741 - 11475.380: 11.9651% ( 29) 00:09:32.515 11475.380 - 11528.019: 12.2889% ( 23) 00:09:32.515 11528.019 - 11580.659: 12.5985% ( 22) 00:09:32.515 11580.659 - 11633.298: 12.9364% ( 24) 00:09:32.515 11633.298 - 11685.937: 13.3164% ( 27) 
00:09:32.515 11685.937 - 11738.577: 13.6543% ( 24) 00:09:32.515 11738.577 - 11791.216: 13.9780% ( 23) 00:09:32.515 11791.216 - 11843.855: 14.4144% ( 31) 00:09:32.515 11843.855 - 11896.495: 14.7945% ( 27) 00:09:32.515 11896.495 - 11949.134: 15.2027% ( 29) 00:09:32.515 11949.134 - 12001.773: 15.5828% ( 27) 00:09:32.515 12001.773 - 12054.413: 15.9347% ( 25) 00:09:32.515 12054.413 - 12107.052: 16.2866% ( 25) 00:09:32.515 12107.052 - 12159.692: 16.6385% ( 25) 00:09:32.515 12159.692 - 12212.331: 17.0327% ( 28) 00:09:32.515 12212.331 - 12264.970: 17.3564% ( 23) 00:09:32.515 12264.970 - 12317.610: 17.6239% ( 19) 00:09:32.515 12317.610 - 12370.249: 17.9195% ( 21) 00:09:32.515 12370.249 - 12422.888: 18.1869% ( 19) 00:09:32.515 12422.888 - 12475.528: 18.4966% ( 22) 00:09:32.515 12475.528 - 12528.167: 18.7922% ( 21) 00:09:32.515 12528.167 - 12580.806: 19.0738% ( 20) 00:09:32.515 12580.806 - 12633.446: 19.3694% ( 21) 00:09:32.515 12633.446 - 12686.085: 19.6368% ( 19) 00:09:32.515 12686.085 - 12738.724: 19.9043% ( 19) 00:09:32.515 12738.724 - 12791.364: 20.1577% ( 18) 00:09:32.515 12791.364 - 12844.003: 20.4392% ( 20) 00:09:32.515 12844.003 - 12896.643: 20.7911% ( 25) 00:09:32.515 12896.643 - 12949.282: 21.1149% ( 23) 00:09:32.515 12949.282 - 13001.921: 21.4527% ( 24) 00:09:32.515 13001.921 - 13054.561: 21.7342% ( 20) 00:09:32.515 13054.561 - 13107.200: 22.0580% ( 23) 00:09:32.515 13107.200 - 13159.839: 22.3395% ( 20) 00:09:32.515 13159.839 - 13212.479: 22.6211% ( 20) 00:09:32.515 13212.479 - 13265.118: 22.9167% ( 21) 00:09:32.515 13265.118 - 13317.757: 23.2967% ( 27) 00:09:32.515 13317.757 - 13370.397: 23.6768% ( 27) 00:09:32.515 13370.397 - 13423.036: 23.9865% ( 22) 00:09:32.515 13423.036 - 13475.676: 24.3525% ( 26) 00:09:32.515 13475.676 - 13580.954: 25.0845% ( 52) 00:09:32.515 13580.954 - 13686.233: 25.7742% ( 49) 00:09:32.515 13686.233 - 13791.512: 26.5484% ( 55) 00:09:32.515 13791.512 - 13896.790: 27.3226% ( 55) 00:09:32.515 13896.790 - 14002.069: 28.1813% ( 61) 00:09:32.515 14002.069 - 14107.348: 28.9274% ( 53) 00:09:32.515 14107.348 - 14212.627: 29.5890% ( 47) 00:09:32.515 14212.627 - 14317.905: 30.2787% ( 49) 00:09:32.515 14317.905 - 14423.184: 30.9544% ( 48) 00:09:32.515 14423.184 - 14528.463: 31.6441% ( 49) 00:09:32.515 14528.463 - 14633.741: 32.3761% ( 52) 00:09:32.515 14633.741 - 14739.020: 33.0518% ( 48) 00:09:32.515 14739.020 - 14844.299: 33.6571% ( 43) 00:09:32.515 14844.299 - 14949.578: 34.2342% ( 41) 00:09:32.515 14949.578 - 15054.856: 34.9381% ( 50) 00:09:32.515 15054.856 - 15160.135: 35.6560% ( 51) 00:09:32.515 15160.135 - 15265.414: 36.3176% ( 47) 00:09:32.515 15265.414 - 15370.692: 37.1059% ( 56) 00:09:32.515 15370.692 - 15475.971: 37.8519% ( 53) 00:09:32.515 15475.971 - 15581.250: 38.4431% ( 42) 00:09:32.515 15581.250 - 15686.529: 38.9921% ( 39) 00:09:32.515 15686.529 - 15791.807: 39.5270% ( 38) 00:09:32.515 15791.807 - 15897.086: 40.0479% ( 37) 00:09:32.515 15897.086 - 16002.365: 40.5546% ( 36) 00:09:32.515 16002.365 - 16107.643: 41.0191% ( 33) 00:09:32.515 16107.643 - 16212.922: 41.5400% ( 37) 00:09:32.515 16212.922 - 16318.201: 42.0186% ( 34) 00:09:32.515 16318.201 - 16423.480: 42.5676% ( 39) 00:09:32.515 16423.480 - 16528.758: 43.1306% ( 40) 00:09:32.515 16528.758 - 16634.037: 43.6796% ( 39) 00:09:32.515 16634.037 - 16739.316: 44.2005% ( 37) 00:09:32.515 16739.316 - 16844.594: 44.6791% ( 34) 00:09:32.515 16844.594 - 16949.873: 45.1577% ( 34) 00:09:32.515 16949.873 - 17055.152: 45.7066% ( 39) 00:09:32.515 17055.152 - 17160.431: 46.1571% ( 32) 00:09:32.515 17160.431 - 17265.709: 
46.6075% ( 32) 00:09:32.515 17265.709 - 17370.988: 47.1284% ( 37) 00:09:32.515 17370.988 - 17476.267: 47.7900% ( 47) 00:09:32.515 17476.267 - 17581.545: 48.4234% ( 45) 00:09:32.515 17581.545 - 17686.824: 49.0991% ( 48) 00:09:32.515 17686.824 - 17792.103: 49.7185% ( 44) 00:09:32.515 17792.103 - 17897.382: 50.4082% ( 49) 00:09:32.515 17897.382 - 18002.660: 51.0417% ( 45) 00:09:32.515 18002.660 - 18107.939: 51.7314% ( 49) 00:09:32.515 18107.939 - 18213.218: 52.4493% ( 51) 00:09:32.515 18213.218 - 18318.496: 53.1813% ( 52) 00:09:32.515 18318.496 - 18423.775: 53.9837% ( 57) 00:09:32.515 18423.775 - 18529.054: 54.8142% ( 59) 00:09:32.515 18529.054 - 18634.333: 55.5321% ( 51) 00:09:32.515 18634.333 - 18739.611: 56.2359% ( 50) 00:09:32.515 18739.611 - 18844.890: 56.9116% ( 48) 00:09:32.515 18844.890 - 18950.169: 57.6577% ( 53) 00:09:32.515 18950.169 - 19055.447: 58.3615% ( 50) 00:09:32.515 19055.447 - 19160.726: 59.0231% ( 47) 00:09:32.515 19160.726 - 19266.005: 59.6988% ( 48) 00:09:32.515 19266.005 - 19371.284: 60.3041% ( 43) 00:09:32.515 19371.284 - 19476.562: 60.9657% ( 47) 00:09:32.515 19476.562 - 19581.841: 61.5991% ( 45) 00:09:32.515 19581.841 - 19687.120: 62.2044% ( 43) 00:09:32.515 19687.120 - 19792.398: 62.7956% ( 42) 00:09:32.515 19792.398 - 19897.677: 63.4994% ( 50) 00:09:32.515 19897.677 - 20002.956: 64.2173% ( 51) 00:09:32.515 20002.956 - 20108.235: 65.0197% ( 57) 00:09:32.516 20108.235 - 20213.513: 65.7658% ( 53) 00:09:32.516 20213.513 - 20318.792: 66.5681% ( 57) 00:09:32.516 20318.792 - 20424.071: 67.3283% ( 54) 00:09:32.516 20424.071 - 20529.349: 68.1025% ( 55) 00:09:32.516 20529.349 - 20634.628: 68.9048% ( 57) 00:09:32.516 20634.628 - 20739.907: 69.5664% ( 47) 00:09:32.516 20739.907 - 20845.186: 70.2843% ( 51) 00:09:32.516 20845.186 - 20950.464: 71.0586% ( 55) 00:09:32.516 20950.464 - 21055.743: 71.8187% ( 54) 00:09:32.516 21055.743 - 21161.022: 72.6351% ( 58) 00:09:32.516 21161.022 - 21266.300: 73.4938% ( 61) 00:09:32.516 21266.300 - 21371.579: 74.2962% ( 57) 00:09:32.516 21371.579 - 21476.858: 75.0985% ( 57) 00:09:32.516 21476.858 - 21582.137: 75.9009% ( 57) 00:09:32.516 21582.137 - 21687.415: 76.6610% ( 54) 00:09:32.516 21687.415 - 21792.694: 77.4634% ( 57) 00:09:32.516 21792.694 - 21897.973: 78.2235% ( 54) 00:09:32.516 21897.973 - 22003.251: 78.9555% ( 52) 00:09:32.516 22003.251 - 22108.530: 79.5467% ( 42) 00:09:32.516 22108.530 - 22213.809: 80.0816% ( 38) 00:09:32.516 22213.809 - 22319.088: 80.6306% ( 39) 00:09:32.516 22319.088 - 22424.366: 81.1937% ( 40) 00:09:32.516 22424.366 - 22529.645: 81.7145% ( 37) 00:09:32.516 22529.645 - 22634.924: 82.2213% ( 36) 00:09:32.516 22634.924 - 22740.202: 82.7280% ( 36) 00:09:32.516 22740.202 - 22845.481: 83.1363% ( 29) 00:09:32.516 22845.481 - 22950.760: 83.4600% ( 23) 00:09:32.516 22950.760 - 23056.039: 83.8260% ( 26) 00:09:32.516 23056.039 - 23161.317: 84.1216% ( 21) 00:09:32.516 23161.317 - 23266.596: 84.3750% ( 18) 00:09:32.516 23266.596 - 23371.875: 84.7128% ( 24) 00:09:32.516 23371.875 - 23477.153: 84.9944% ( 20) 00:09:32.516 23477.153 - 23582.432: 85.2900% ( 21) 00:09:32.516 23582.432 - 23687.711: 85.5574% ( 19) 00:09:32.516 23687.711 - 23792.990: 85.6982% ( 10) 00:09:32.516 23792.990 - 23898.268: 85.8812% ( 13) 00:09:32.516 23898.268 - 24003.547: 86.0783% ( 14) 00:09:32.516 24003.547 - 24108.826: 86.3739% ( 21) 00:09:32.516 24108.826 - 24214.104: 86.6836% ( 22) 00:09:32.516 24214.104 - 24319.383: 86.9651% ( 20) 00:09:32.516 24319.383 - 24424.662: 87.2044% ( 17) 00:09:32.516 24424.662 - 24529.941: 87.5704% ( 26) 00:09:32.516 
24529.941 - 24635.219: 87.9082% ( 24) 00:09:32.516 24635.219 - 24740.498: 88.2883% ( 27) 00:09:32.516 24740.498 - 24845.777: 88.5980% ( 22) 00:09:32.516 24845.777 - 24951.055: 88.9640% ( 26) 00:09:32.516 24951.055 - 25056.334: 89.2736% ( 22) 00:09:32.516 25056.334 - 25161.613: 89.5270% ( 18) 00:09:32.516 25161.613 - 25266.892: 89.7945% ( 19) 00:09:32.516 25266.892 - 25372.170: 90.0197% ( 16) 00:09:32.516 25372.170 - 25477.449: 90.3435% ( 23) 00:09:32.516 25477.449 - 25582.728: 90.6250% ( 20) 00:09:32.516 25582.728 - 25688.006: 90.9488% ( 23) 00:09:32.516 25688.006 - 25793.285: 91.2303% ( 20) 00:09:32.516 25793.285 - 25898.564: 91.5400% ( 22) 00:09:32.516 25898.564 - 26003.843: 91.8497% ( 22) 00:09:32.516 26003.843 - 26109.121: 92.1593% ( 22) 00:09:32.516 26109.121 - 26214.400: 92.4409% ( 20) 00:09:32.516 26214.400 - 26319.679: 92.7224% ( 20) 00:09:32.516 26319.679 - 26424.957: 93.0743% ( 25) 00:09:32.516 26424.957 - 26530.236: 93.3277% ( 18) 00:09:32.516 26530.236 - 26635.515: 93.6092% ( 20) 00:09:32.516 26635.515 - 26740.794: 93.8485% ( 17) 00:09:32.516 26740.794 - 26846.072: 94.0597% ( 15) 00:09:32.516 26846.072 - 26951.351: 94.2708% ( 15) 00:09:32.516 26951.351 - 27161.908: 94.7072% ( 31) 00:09:32.516 27161.908 - 27372.466: 95.1154% ( 29) 00:09:32.516 27372.466 - 27583.023: 95.4955% ( 27) 00:09:32.516 27583.023 - 27793.581: 95.8333% ( 24) 00:09:32.516 27793.581 - 28004.138: 96.1571% ( 23) 00:09:32.516 28004.138 - 28214.696: 96.4668% ( 22) 00:09:32.516 28214.696 - 28425.253: 96.7905% ( 23) 00:09:32.516 28425.253 - 28635.810: 97.1284% ( 24) 00:09:32.516 28635.810 - 28846.368: 97.4099% ( 20) 00:09:32.516 28846.368 - 29056.925: 97.6070% ( 14) 00:09:32.516 29056.925 - 29267.483: 97.7759% ( 12) 00:09:32.516 29267.483 - 29478.040: 97.9167% ( 10) 00:09:32.516 29478.040 - 29688.598: 97.9730% ( 4) 00:09:32.516 29688.598 - 29899.155: 98.0152% ( 3) 00:09:32.516 29899.155 - 30109.712: 98.0715% ( 4) 00:09:32.516 30109.712 - 30320.270: 98.1278% ( 4) 00:09:32.516 30320.270 - 30530.827: 98.1700% ( 3) 00:09:32.516 30530.827 - 30741.385: 98.1982% ( 2) 00:09:32.516 36426.435 - 36636.993: 98.2827% ( 6) 00:09:32.516 36636.993 - 36847.550: 98.3390% ( 4) 00:09:32.516 36847.550 - 37058.108: 98.4234% ( 6) 00:09:32.516 37058.108 - 37268.665: 98.4938% ( 5) 00:09:32.516 37268.665 - 37479.222: 98.5642% ( 5) 00:09:32.516 37479.222 - 37689.780: 98.6346% ( 5) 00:09:32.516 37689.780 - 37900.337: 98.7050% ( 5) 00:09:32.516 37900.337 - 38110.895: 98.7753% ( 5) 00:09:32.516 38110.895 - 38321.452: 98.8457% ( 5) 00:09:32.516 38321.452 - 38532.010: 98.9302% ( 6) 00:09:32.516 38532.010 - 38742.567: 99.0006% ( 5) 00:09:32.516 38742.567 - 38953.124: 99.0709% ( 5) 00:09:32.516 38953.124 - 39163.682: 99.0991% ( 2) 00:09:32.516 49480.996 - 49691.553: 99.1554% ( 4) 00:09:32.516 49691.553 - 49902.111: 99.2258% ( 5) 00:09:32.516 49902.111 - 50112.668: 99.2962% ( 5) 00:09:32.516 50112.668 - 50323.226: 99.3525% ( 4) 00:09:32.516 50323.226 - 50533.783: 99.4088% ( 4) 00:09:32.516 50533.783 - 50744.341: 99.4792% ( 5) 00:09:32.516 50744.341 - 50954.898: 99.5495% ( 5) 00:09:32.516 50954.898 - 51165.455: 99.6059% ( 4) 00:09:32.516 51165.455 - 51376.013: 99.6762% ( 5) 00:09:32.516 51376.013 - 51586.570: 99.7466% ( 5) 00:09:32.516 51586.570 - 51797.128: 99.8311% ( 6) 00:09:32.516 51797.128 - 52007.685: 99.9155% ( 6) 00:09:32.516 52007.685 - 52218.243: 99.9859% ( 5) 00:09:32.516 52218.243 - 52428.800: 100.0000% ( 1) 00:09:32.516 00:09:32.516 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:32.516 
============================================================================== 00:09:32.516 Range in us Cumulative IO count 00:09:32.516 8896.051 - 8948.691: 0.0282% ( 2) 00:09:32.516 8948.691 - 9001.330: 0.0563% ( 2) 00:09:32.516 9001.330 - 9053.969: 0.0985% ( 3) 00:09:32.516 9053.969 - 9106.609: 0.1267% ( 2) 00:09:32.516 9106.609 - 9159.248: 0.2534% ( 9) 00:09:32.516 9159.248 - 9211.888: 0.3801% ( 9) 00:09:32.516 9211.888 - 9264.527: 0.4786% ( 7) 00:09:32.516 9264.527 - 9317.166: 0.6194% ( 10) 00:09:32.516 9317.166 - 9369.806: 0.7883% ( 12) 00:09:32.516 9369.806 - 9422.445: 0.9713% ( 13) 00:09:32.516 9422.445 - 9475.084: 1.1120% ( 10) 00:09:32.516 9475.084 - 9527.724: 1.2810% ( 12) 00:09:32.516 9527.724 - 9580.363: 1.4217% ( 10) 00:09:32.516 9580.363 - 9633.002: 1.5907% ( 12) 00:09:32.516 9633.002 - 9685.642: 1.7596% ( 12) 00:09:32.516 9685.642 - 9738.281: 1.9707% ( 15) 00:09:32.516 9738.281 - 9790.920: 2.1819% ( 15) 00:09:32.516 9790.920 - 9843.560: 2.4071% ( 16) 00:09:32.516 9843.560 - 9896.199: 2.6886% ( 20) 00:09:32.516 9896.199 - 9948.839: 2.9702% ( 20) 00:09:32.516 9948.839 - 10001.478: 3.2517% ( 20) 00:09:32.516 10001.478 - 10054.117: 3.5191% ( 19) 00:09:32.516 10054.117 - 10106.757: 3.8007% ( 20) 00:09:32.516 10106.757 - 10159.396: 4.0963% ( 21) 00:09:32.516 10159.396 - 10212.035: 4.3215% ( 16) 00:09:32.516 10212.035 - 10264.675: 4.5749% ( 18) 00:09:32.516 10264.675 - 10317.314: 4.8283% ( 18) 00:09:32.516 10317.314 - 10369.953: 5.0816% ( 18) 00:09:32.516 10369.953 - 10422.593: 5.3209% ( 17) 00:09:32.516 10422.593 - 10475.232: 5.5884% ( 19) 00:09:32.516 10475.232 - 10527.871: 5.8277% ( 17) 00:09:32.516 10527.871 - 10580.511: 6.0529% ( 16) 00:09:32.516 10580.511 - 10633.150: 6.2641% ( 15) 00:09:32.516 10633.150 - 10685.790: 6.4893% ( 16) 00:09:32.516 10685.790 - 10738.429: 6.7286% ( 17) 00:09:32.516 10738.429 - 10791.068: 6.9116% ( 13) 00:09:32.516 10791.068 - 10843.708: 7.1650% ( 18) 00:09:32.516 10843.708 - 10896.347: 7.4324% ( 19) 00:09:32.516 10896.347 - 10948.986: 7.6577% ( 16) 00:09:32.516 10948.986 - 11001.626: 7.8407% ( 13) 00:09:32.516 11001.626 - 11054.265: 8.0940% ( 18) 00:09:32.516 11054.265 - 11106.904: 8.2911% ( 14) 00:09:32.516 11106.904 - 11159.544: 8.5023% ( 15) 00:09:32.516 11159.544 - 11212.183: 8.6993% ( 14) 00:09:32.516 11212.183 - 11264.822: 8.8682% ( 12) 00:09:32.516 11264.822 - 11317.462: 9.0935% ( 16) 00:09:32.516 11317.462 - 11370.101: 9.2905% ( 14) 00:09:32.516 11370.101 - 11422.741: 9.4876% ( 14) 00:09:32.516 11422.741 - 11475.380: 9.6847% ( 14) 00:09:32.516 11475.380 - 11528.019: 9.8958% ( 15) 00:09:32.516 11528.019 - 11580.659: 10.1492% ( 18) 00:09:32.516 11580.659 - 11633.298: 10.3885% ( 17) 00:09:32.516 11633.298 - 11685.937: 10.7264% ( 24) 00:09:32.516 11685.937 - 11738.577: 11.0360% ( 22) 00:09:32.516 11738.577 - 11791.216: 11.3457% ( 22) 00:09:32.516 11791.216 - 11843.855: 11.6836% ( 24) 00:09:32.516 11843.855 - 11896.495: 12.0073% ( 23) 00:09:32.516 11896.495 - 11949.134: 12.3733% ( 26) 00:09:32.516 11949.134 - 12001.773: 12.7252% ( 25) 00:09:32.516 12001.773 - 12054.413: 13.1475% ( 30) 00:09:32.516 12054.413 - 12107.052: 13.5698% ( 30) 00:09:32.516 12107.052 - 12159.692: 13.9499% ( 27) 00:09:32.516 12159.692 - 12212.331: 14.3863% ( 31) 00:09:32.516 12212.331 - 12264.970: 14.7100% ( 23) 00:09:32.516 12264.970 - 12317.610: 15.0479% ( 24) 00:09:32.516 12317.610 - 12370.249: 15.3998% ( 25) 00:09:32.516 12370.249 - 12422.888: 15.7517% ( 25) 00:09:32.516 12422.888 - 12475.528: 16.1036% ( 25) 00:09:32.516 12475.528 - 12528.167: 16.4274% ( 23) 
00:09:32.517 [remainder of the preceding latency histogram's per-bucket data omitted]
00:09:32.518 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:32.518 ==============================================================================
00:09:32.518        Range in us     Cumulative    IO count
00:09:32.519 [per-bucket latency data omitted]
00:09:32.519 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:32.519 ==============================================================================
00:09:32.519        Range in us     Cumulative    IO count
00:09:32.521 [per-bucket latency data omitted]
00:09:32.521 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:32.521 ==============================================================================
00:09:32.521        Range in us     Cumulative    IO count
00:09:32.522 [per-bucket latency data omitted]
00:09:32.522 15:02:33 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
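For readers skimming the log, the invocation above is worth unpacking. A minimal annotated sketch follows; the flag meanings are my reading of spdk_nvme_perf's usage text, not something stated in this log, so verify against the tool's --help for the SPDK revision under test:

# -q 128   queue depth: keep 128 I/Os outstanding per namespace
# -w write workload pattern: 100% writes
# -o 12288 I/O size in bytes (12 KiB per I/O)
# -t 1     run time in seconds
# -LL      latency tracking; the repeated L requests the detailed histograms below
# -i 0     shared-memory group ID, allowing coexistence with other SPDK processes
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0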
00:09:33.903 Initializing NVMe Controllers
00:09:33.903 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:33.903 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:33.903 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:33.903 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:33.903 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:09:33.903 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:09:33.903 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:09:33.903 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:09:33.903 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:09:33.903 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:09:33.903 Initialization complete. Launching workers.
00:09:33.903 ========================================================
00:09:33.903                                                                                        Latency(us)
00:09:33.903 Device Information                     :       IOPS      MiB/s    Average        min        max
00:09:33.903 PCIE (0000:00:10.0) NSID 1 from core 0:    6644.82      77.87   19319.44   12044.78   44349.73
00:09:33.903 PCIE (0000:00:11.0) NSID 1 from core 0:    6644.82      77.87   19279.06   12458.89   41312.09
00:09:33.903 PCIE (0000:00:13.0) NSID 1 from core 0:    6644.82      77.87   19239.12   12438.15   39581.36
00:09:33.903 PCIE (0000:00:12.0) NSID 1 from core 0:    6644.82      77.87   19196.26   12224.23   36792.49
00:09:33.903 PCIE (0000:00:12.0) NSID 2 from core 0:    6644.82      77.87   19150.44   12370.46   33721.77
00:09:33.903 PCIE (0000:00:12.0) NSID 3 from core 0:    6644.82      77.87   19104.74   12791.16   31740.13
00:09:33.903 ========================================================
00:09:33.903 Total                                  :   39868.90     467.21   19214.84   12044.78   44349.73
00:09:33.903
00:09:33.904 Summary latency data, all devices from core 0 (percentile : latency in us):
00:09:33.904 =================================================================================
00:09:33.904 Percentile     10.0 NSID1    11.0 NSID1    13.0 NSID1    12.0 NSID1    12.0 NSID2    12.0 NSID3
00:09:33.904  1.00000%      12949.282     12844.003     13212.479     13107.200     13054.561     13317.757
00:09:33.904 10.00000%      15265.414     15265.414     15370.692     15475.971     15370.692     15581.250
00:09:33.904 25.00000%      16528.758     16739.316     16739.316     16634.037     16634.037     16634.037
00:09:33.904 50.00000%      19055.447     18844.890     18844.890     19055.447     19160.726     19160.726
00:09:33.904 75.00000%      21371.579     21476.858     21476.858     21266.300     21266.300     21055.743
00:09:33.904 90.00000%      23056.039     23161.317     23371.875     23161.317     23371.875     22740.202
00:09:33.904 95.00000%      25161.613     24740.498     24635.219     24424.662     24635.219     24635.219
00:09:33.904 98.00000%      28214.696     28425.253     26424.957     26319.679     25898.564     26319.679
00:09:33.904 99.00000%      34320.861     32846.959     30951.942     28004.138     27161.908     28004.138
00:09:33.904 99.50000%      43164.273     40216.469     38532.010     35794.763     32425.844     30741.385
00:09:33.904 99.90000%      44217.060     41269.256     39374.239     36636.993     33689.189     31583.614
00:09:33.904 99.99000%      44427.618     41479.814     39584.797     36847.550     33899.746     31794.172
00:09:33.904 (99.99900%, 99.99990% and 99.99999% are identical to the 99.99000% values for every device)
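The throughput columns in the device table are mutually consistent with the 12288-byte I/O size used by the run; a quick arithmetic cross-check (only the numbers themselves are taken from the table):

# 6644.82 IOPS at 12288 bytes per I/O:
awk 'BEGIN { printf "%.2f MiB/s\n", 6644.82 * 12288 / (1024 * 1024) }'   # -> 77.87 MiB/s
# Six namespaces at equal IOPS; the Total row (39868.90) matches up to rounding:
awk 'BEGIN { printf "%.2f IOPS\n", 6 * 6644.82 }'                        # -> 39868.92 IOPS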
00:09:33.904 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:33.904 ==============================================================================
00:09:33.904        Range in us     Cumulative    IO count
00:09:33.905 [per-bucket latency data omitted]
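Each bucket line in these histograms reads "low - high: cumulative% ( count )", with the percentage cumulative across all completed I/Os, so the median lands in the first bucket whose cumulative percentage reaches 50. A rough sketch for locating it, assuming the console output is saved as perf.log (an assumed filename) and that the Jenkins timestamp occupies the first field:

# Print the first bucket whose cumulative percentage reaches 50%, i.e. the
# median-latency bucket of the first histogram in the file (exit stops there).
# Field layout inferred from the bucket lines above; naive matching, with no
# attempt to distinguish between the per-device histograms.
awk '$3 == "-" && $5+0 >= 50 { print $2, "-", $4, $5; exit }' perf.log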
13107.200: 1.2169% ( 5) 00:09:33.904 13107.200 - 13159.839: 1.2620% ( 3) 00:09:33.904 13159.839 - 13212.479: 1.3371% ( 5) 00:09:33.904 13212.479 - 13265.118: 1.4123% ( 5) 00:09:33.904 13265.118 - 13317.757: 1.4874% ( 5) 00:09:33.904 13317.757 - 13370.397: 1.5475% ( 4) 00:09:33.904 13370.397 - 13423.036: 1.6226% ( 5) 00:09:33.904 13423.036 - 13475.676: 1.7278% ( 7) 00:09:33.904 13475.676 - 13580.954: 1.8780% ( 10) 00:09:33.904 13580.954 - 13686.233: 2.0433% ( 11) 00:09:33.904 13686.233 - 13791.512: 2.2837% ( 16) 00:09:33.904 13791.512 - 13896.790: 2.5541% ( 18) 00:09:33.904 13896.790 - 14002.069: 2.8546% ( 20) 00:09:33.904 14002.069 - 14107.348: 3.1100% ( 17) 00:09:33.904 14107.348 - 14212.627: 3.5757% ( 31) 00:09:33.904 14212.627 - 14317.905: 4.4020% ( 55) 00:09:33.904 14317.905 - 14423.184: 4.9429% ( 36) 00:09:33.904 14423.184 - 14528.463: 5.5739% ( 42) 00:09:33.904 14528.463 - 14633.741: 6.0847% ( 34) 00:09:33.904 14633.741 - 14739.020: 6.5204% ( 29) 00:09:33.904 14739.020 - 14844.299: 7.0613% ( 36) 00:09:33.904 14844.299 - 14949.578: 8.0078% ( 63) 00:09:33.904 14949.578 - 15054.856: 9.2698% ( 84) 00:09:33.904 15054.856 - 15160.135: 9.9760% ( 47) 00:09:33.904 15160.135 - 15265.414: 10.7873% ( 54) 00:09:33.904 15265.414 - 15370.692: 11.5535% ( 51) 00:09:33.904 15370.692 - 15475.971: 12.6202% ( 71) 00:09:33.904 15475.971 - 15581.250: 14.1226% ( 100) 00:09:33.904 15581.250 - 15686.529: 15.2344% ( 74) 00:09:33.904 15686.529 - 15791.807: 16.6316% ( 93) 00:09:33.904 15791.807 - 15897.086: 17.9387% ( 87) 00:09:33.904 15897.086 - 16002.365: 19.0956% ( 77) 00:09:33.904 16002.365 - 16107.643: 20.2374% ( 76) 00:09:33.904 16107.643 - 16212.922: 21.2891% ( 70) 00:09:33.904 16212.922 - 16318.201: 22.4910% ( 80) 00:09:33.904 16318.201 - 16423.480: 23.9633% ( 98) 00:09:33.904 16423.480 - 16528.758: 25.1352% ( 78) 00:09:33.904 16528.758 - 16634.037: 26.2921% ( 77) 00:09:33.904 16634.037 - 16739.316: 27.2386% ( 63) 00:09:33.904 16739.316 - 16844.594: 28.0499% ( 54) 00:09:33.904 16844.594 - 16949.873: 29.1617% ( 74) 00:09:33.904 16949.873 - 17055.152: 30.3185% ( 77) 00:09:33.904 17055.152 - 17160.431: 31.3401% ( 68) 00:09:33.904 17160.431 - 17265.709: 32.4970% ( 77) 00:09:33.904 17265.709 - 17370.988: 33.9093% ( 94) 00:09:33.904 17370.988 - 17476.267: 35.0962% ( 79) 00:09:33.904 17476.267 - 17581.545: 36.0877% ( 66) 00:09:33.904 17581.545 - 17686.824: 36.9441% ( 57) 00:09:33.904 17686.824 - 17792.103: 37.8756% ( 62) 00:09:33.904 17792.103 - 17897.382: 38.8672% ( 66) 00:09:33.904 17897.382 - 18002.660: 39.9339% ( 71) 00:09:33.904 18002.660 - 18107.939: 40.8654% ( 62) 00:09:33.904 18107.939 - 18213.218: 41.7819% ( 61) 00:09:33.904 18213.218 - 18318.496: 42.8185% ( 69) 00:09:33.904 18318.496 - 18423.775: 43.6599% ( 56) 00:09:33.904 18423.775 - 18529.054: 44.6665% ( 67) 00:09:33.904 18529.054 - 18634.333: 45.9886% ( 88) 00:09:33.904 18634.333 - 18739.611: 47.1304% ( 76) 00:09:33.904 18739.611 - 18844.890: 48.4075% ( 85) 00:09:33.904 18844.890 - 18950.169: 49.8047% ( 93) 00:09:33.904 18950.169 - 19055.447: 51.1869% ( 92) 00:09:33.904 19055.447 - 19160.726: 52.2536% ( 71) 00:09:33.904 19160.726 - 19266.005: 53.4555% ( 80) 00:09:33.904 19266.005 - 19371.284: 54.6274% ( 78) 00:09:33.904 19371.284 - 19476.562: 55.7843% ( 77) 00:09:33.904 19476.562 - 19581.841: 56.8510% ( 71) 00:09:33.904 19581.841 - 19687.120: 57.8876% ( 69) 00:09:33.904 19687.120 - 19792.398: 58.9994% ( 74) 00:09:33.904 19792.398 - 19897.677: 60.1863% ( 79) 00:09:33.904 19897.677 - 20002.956: 61.4784% ( 86) 00:09:33.904 20002.956 - 20108.235: 
00:09:33.904 [latency bucket rows elided: cumulative 62.8756% at this point, reaching 100.0000% at 44427.618 us]
00:09:33.905
00:09:33.905 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:33.905 ==============================================================================
00:09:33.905        Range in us     Cumulative IO count
00:09:33.905 [latency bucket rows elided: 12422.888 us (0.0751% cumulative) through 41479.814 us (100.0000%)]
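Each histogram row above is one latency bucket: the pair of numbers is the bucket's bounds in microseconds, the percentage is the cumulative share of I/Os that completed at or below the bucket's upper bound, and the parenthesized figure is the raw count that fell in that bucket alone. A minimal awk sketch of how such a table can be derived from raw per-bucket counts (the input file buckets.txt with "upper_bound count" pairs, and the column layout, are assumptions for illustration, not SPDK's implementation):

  # Two passes over the same file: the first sums all counts, the second
  # prints each bucket's upper bound with the running cumulative percentage
  # and the bucket's own raw count.
  awk 'NR==FNR { total += $2; next }
       { cum += $2; printf "%12.3f: %8.4f%% (%5d)\n", $1, 100 * cum / total, $2 }' buckets.txt buckets.txt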
00:09:33.906 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:33.906 ==============================================================================
00:09:33.906        Range in us     Cumulative IO count
00:09:33.906 [latency bucket rows elided: 12422.888 us (0.0751% cumulative) through 39584.797 us (100.0000%)]
00:09:33.907
00:09:33.907 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:33.907 ==============================================================================
00:09:33.907        Range in us     Cumulative IO count
00:09:33.907 [latency bucket rows elided: 12212.331 us (0.0751% cumulative) through 36847.550 us (100.0000%)]
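Because the right-hand column is cumulative, tail latencies can be read straight off these tables: the p99 latency is the upper bound of the first row whose percentage reaches 99. A hypothetical one-liner over a table saved to hist.txt (the file name is illustrative):

  # Strip ':' and '%'; then $3 is the bucket's upper bound and $4 the
  # cumulative percentage. Print the first row at or beyond 99%.
  awk '{ gsub(/[:%]/, "") } $4 + 0 >= 99 { print "p99 <= " $3 " us"; exit }' hist.txt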
00:09:33.908 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:33.908 ==============================================================================
00:09:33.908        Range in us     Cumulative IO count
00:09:33.908 [latency bucket rows elided: 12370.249 us (0.0901% cumulative) through 33899.746 us (100.0000%)]
00:09:33.909
00:09:33.909 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:33.909 ==============================================================================
00:09:33.909        Range in us     Cumulative IO count
00:09:33.909 [latency bucket rows elided: 12738.724 us (0.0150% cumulative) through 31794.172 us (100.0000%)]
00:09:33.910
00:09:33.910 15:02:34 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:09:33.910
00:09:33.910 real 0m2.966s
00:09:33.910 user 0m2.405s
00:09:33.910 sys 0m0.420s
00:09:33.910 ************************************
00:09:33.910 END TEST nvme_perf
00:09:33.910 ************************************
00:09:33.910 15:02:34 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:33.910 15:02:34 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
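Every test block in this log has the same shape: a run_test wrapper prints the START banner, times the command (which is where the real/user/sys lines come from), and prints the END banner. A minimal sketch of that pattern as a plain bash function (illustrative; the actual run_test lives in common/autotest_common.sh and additionally manages xtrace state and the '[' N -le 1 ']' argument checks seen around each invocation):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

Called as run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0, it produces the banner-and-timing framing seen in the sections that follow.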
00:09:33.910 15:02:34 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:33.910 15:02:34 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:33.910 15:02:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:33.910 15:02:34 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:33.910 ************************************
00:09:33.910 START TEST nvme_hello_world
00:09:33.910 ************************************
00:09:33.910 15:02:34 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:34.169 Initializing NVMe Controllers
00:09:34.169 Attached to 0000:00:10.0
00:09:34.169   Namespace ID: 1 size: 6GB
00:09:34.169 Attached to 0000:00:11.0
00:09:34.169   Namespace ID: 1 size: 5GB
00:09:34.169 Attached to 0000:00:13.0
00:09:34.169   Namespace ID: 1 size: 1GB
00:09:34.169 Attached to 0000:00:12.0
00:09:34.169   Namespace ID: 1 size: 4GB
00:09:34.169   Namespace ID: 2 size: 4GB
00:09:34.169   Namespace ID: 3 size: 4GB
00:09:34.169 Initialization complete.
00:09:34.169 INFO: using host memory buffer for IO
00:09:34.169 Hello world!
00:09:34.169 INFO: using host memory buffer for IO
00:09:34.169 Hello world!
00:09:34.169 INFO: using host memory buffer for IO
00:09:34.169 Hello world!
00:09:34.169 INFO: using host memory buffer for IO
00:09:34.169 Hello world!
00:09:34.169 INFO: using host memory buffer for IO
00:09:34.169 Hello world!
00:09:34.169 INFO: using host memory buffer for IO
00:09:34.169 Hello world!
00:09:34.429
00:09:34.429 real 0m0.306s
00:09:34.429 user 0m0.114s
00:09:34.429 sys 0m0.150s
00:09:34.429 15:02:35 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:34.429 ************************************
00:09:34.429 END TEST nvme_hello_world
00:09:34.429 ************************************
00:09:34.429 15:02:35 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:09:34.429 15:02:35 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:34.429 15:02:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:34.429 15:02:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:34.429 15:02:35 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:34.429 ************************************
00:09:34.429 START TEST nvme_sgl
00:09:34.429 ************************************
00:09:34.429 15:02:35 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:34.689 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:09:34.689 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:09:34.689 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:09:34.689 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:09:34.689 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:09:34.689 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:09:34.689 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:09:34.689 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:09:34.689 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:09:34.689 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:09:34.690 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:09:34.690 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:09:34.690 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:09:34.690 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:09:34.690 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:09:34.690 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:09:34.690 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:09:34.690 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:09:34.690 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:09:34.690 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:09:34.690 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:09:34.690 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:09:34.690 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:09:34.690 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:09:34.690 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:09:34.690 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:09:34.690 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:09:34.690 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:09:34.690 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:09:34.690 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:09:34.690 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:09:34.690 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:09:34.690 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:09:34.690 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:09:34.690 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:09:34.690 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:09:34.690 NVMe Readv/Writev Request test
00:09:34.690 Attached to 0000:00:10.0
00:09:34.690 Attached to 0000:00:11.0
00:09:34.690 Attached to 0000:00:13.0
00:09:34.690 Attached to 0000:00:12.0
00:09:34.690 0000:00:10.0: build_io_request_2 test passed
00:09:34.690 0000:00:10.0: build_io_request_4 test passed
00:09:34.690 0000:00:10.0: build_io_request_5 test passed
00:09:34.690 0000:00:10.0: build_io_request_6 test passed
00:09:34.690 0000:00:10.0: build_io_request_7 test passed
00:09:34.690 0000:00:10.0: build_io_request_10 test passed
00:09:34.690 0000:00:11.0: build_io_request_2 test passed
00:09:34.690 0000:00:11.0: build_io_request_4 test passed
00:09:34.690 0000:00:11.0: build_io_request_5 test passed
00:09:34.690 0000:00:11.0: build_io_request_6 test passed
00:09:34.690 0000:00:11.0: build_io_request_7 test passed
00:09:34.690 0000:00:11.0: build_io_request_10 test passed
00:09:34.690 Cleaning up...
00:09:34.690
00:09:34.690 real 0m0.395s
00:09:34.690 user 0m0.188s
00:09:34.690 sys 0m0.155s
00:09:34.690 15:02:35 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:34.690 ************************************
00:09:34.690 END TEST nvme_sgl
00:09:34.690 ************************************
00:09:34.690 15:02:35 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
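The Invalid IO length parameter lines above are deliberate: the sgl test submits malformed scatter-gather request shapes and counts the run as passing only when the driver rejects them. The same expect-failure pattern, sketched as a shell helper for illustration (the actual checks happen in C inside the sgl binary):

  expect_failure() {
      # Succeeds only when the wrapped command fails, mirroring the
      # "failed as expected" convention used throughout these tests.
      if "$@"; then
          echo "FAIL: '$*' succeeded but was expected to fail"
          return 1
      fi
      echo "OK: '$*' failed as expected"
  }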
00:09:34.950 15:02:35 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:34.950 15:02:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:34.950 15:02:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:34.950 15:02:35 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:34.950 ************************************
00:09:34.950 START TEST nvme_e2edp
00:09:34.950 ************************************
00:09:34.950 15:02:35 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:35.209 NVMe Write/Read with End-to-End data protection test
00:09:35.209 Attached to 0000:00:10.0
00:09:35.209 Attached to 0000:00:11.0
00:09:35.209 Attached to 0000:00:13.0
00:09:35.209 Attached to 0000:00:12.0
00:09:35.209 Cleaning up...
00:09:35.209
00:09:35.209 real 0m0.309s
00:09:35.209 user 0m0.102s
00:09:35.209 sys 0m0.161s
00:09:35.209 15:02:35 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:35.209 15:02:35 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:09:35.209 ************************************
00:09:35.209 END TEST nvme_e2edp
00:09:35.209 ************************************
00:09:35.209 15:02:35 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:35.209 15:02:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:35.209 15:02:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:35.209 15:02:35 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:35.209 ************************************
00:09:35.209 START TEST nvme_reserve
00:09:35.209 ************************************
00:09:35.209 15:02:35 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:35.468 =====================================================
00:09:35.468 NVMe Controller at PCI bus 0, device 16, function 0
00:09:35.468 =====================================================
00:09:35.468 Reservations: Not Supported
00:09:35.468 =====================================================
00:09:35.468 NVMe Controller at PCI bus 0, device 17, function 0
00:09:35.468 =====================================================
00:09:35.468 Reservations: Not Supported
00:09:35.468 =====================================================
00:09:35.468 NVMe Controller at PCI bus 0, device 19, function 0
00:09:35.468 =====================================================
00:09:35.468 Reservations: Not Supported
00:09:35.468 =====================================================
00:09:35.468 NVMe Controller at PCI bus 0, device 18, function 0
00:09:35.468 =====================================================
00:09:35.468 Reservations: Not Supported
00:09:35.468 Reservation test passed
00:09:35.468
00:09:35.468 real 0m0.345s
00:09:35.468 user 0m0.123s
00:09:35.468 sys 0m0.173s
00:09:35.468 15:02:36 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:35.468 ************************************
00:09:35.468 END TEST nvme_reserve
00:09:35.468 ************************************
00:09:35.468 15:02:36 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
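Reservations: Not Supported is reported per controller because reservation support is optional in NVMe: it is advertised by bit 5 of the ONCS field in the Identify Controller data. Outside SPDK, the same check can be sketched with nvme-cli (the device path is hypothetical, and the parsing assumes nvme id-ctrl's usual "oncs : 0x.." output line):

  # Read the ONCS value (e.g. 0x5f) and test bit 5, the reservations bit.
  oncs=$(nvme id-ctrl /dev/nvme0 | awk '/^oncs/ { print $3 }')
  if (( (oncs >> 5) & 1 )); then
      echo "Reservations: Supported"
  else
      echo "Reservations: Not Supported"
  fi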
00:09:35.727 15:02:36 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:35.727 15:02:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:35.727 15:02:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:35.727 15:02:36 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:35.727 ************************************
00:09:35.727 START TEST nvme_err_injection
00:09:35.727 ************************************
00:09:35.727 15:02:36 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:35.986 NVMe Error Injection test
00:09:35.986 Attached to 0000:00:10.0
00:09:35.986 Attached to 0000:00:11.0
00:09:35.986 Attached to 0000:00:13.0
00:09:35.986 Attached to 0000:00:12.0
00:09:35.986 0000:00:11.0: get features failed as expected
00:09:35.986 0000:00:13.0: get features failed as expected
00:09:35.986 0000:00:12.0: get features failed as expected
00:09:35.986 0000:00:10.0: get features failed as expected
00:09:35.986 0000:00:10.0: get features successfully as expected
00:09:35.986 0000:00:11.0: get features successfully as expected
00:09:35.986 0000:00:13.0: get features successfully as expected
00:09:35.986 0000:00:12.0: get features successfully as expected
00:09:35.986 0000:00:13.0: read failed as expected
00:09:35.986 0000:00:10.0: read failed as expected
00:09:35.986 0000:00:11.0: read failed as expected
00:09:35.986 0000:00:12.0: read failed as expected
00:09:35.986 0000:00:10.0: read successfully as expected
00:09:35.986 0000:00:11.0: read successfully as expected
00:09:35.986 0000:00:13.0: read successfully as expected
00:09:35.986 0000:00:12.0: read successfully as expected
00:09:35.986 Cleaning up...
00:09:35.986
00:09:35.986 real 0m0.345s
00:09:35.986 user 0m0.126s
00:09:35.986 sys 0m0.171s
00:09:35.986 15:02:36 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:35.986 15:02:36 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:09:35.986 ************************************
00:09:35.986 END TEST nvme_err_injection
00:09:35.986 ************************************
00:09:35.986 15:02:36 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:09:35.986 15:02:36 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:09:35.986 15:02:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:35.986 15:02:36 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:35.986 ************************************
00:09:35.986 START TEST nvme_overhead
00:09:35.986 ************************************
00:09:35.986 15:02:36 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:09:37.405 Initializing NVMe Controllers
00:09:37.405 Attached to 0000:00:10.0
00:09:37.405 Attached to 0000:00:11.0
00:09:37.405 Attached to 0000:00:13.0
00:09:37.405 Attached to 0000:00:12.0
00:09:37.405 Initialization complete. Launching workers.
00:09:37.405 submit (in ns) avg, min, max = 14489.6, 12359.8, 59028.1
00:09:37.405 complete (in ns) avg, min, max = 9504.4, 7803.2, 110873.9
00:09:37.405
00:09:37.405 Submit histogram
00:09:37.405 ================
00:09:37.405        Range in us     Cumulative Count
00:09:37.405 [bucket rows elided: 12.337 us (0.0114% cumulative) through 59.219 us (100.0000%)]
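The avg, min, max summary lines and the histograms are two views over the same per-I/O samples. As a sketch, a summary line of that shape can be reproduced from a column of nanosecond samples (samples.txt is a hypothetical dump, one latency per line):

  awk 'NR == 1 { min = max = $1 }
       { sum += $1; if ($1 < min) min = $1; if ($1 > max) max = $1 }
       END { printf "submit (in ns) avg, min, max = %.1f, %.1f, %.1f\n", sum / NR, min, max }' samples.txt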
00:09:37.405 20.151 - 20.254: 96.5635% ( 12) 00:09:37.405 20.254 - 20.357: 96.6773% ( 10) 00:09:37.405 20.357 - 20.459: 96.9049% ( 20) 00:09:37.405 20.459 - 20.562: 97.0528% ( 13) 00:09:37.405 20.562 - 20.665: 97.1666% ( 10) 00:09:37.405 20.665 - 20.768: 97.3031% ( 12) 00:09:37.405 20.768 - 20.871: 97.3942% ( 8) 00:09:37.405 20.871 - 20.973: 97.5193% ( 11) 00:09:37.405 20.973 - 21.076: 97.6331% ( 10) 00:09:37.405 21.076 - 21.179: 97.7128% ( 7) 00:09:37.405 21.179 - 21.282: 97.7469% ( 3) 00:09:37.405 21.282 - 21.385: 97.8493% ( 9) 00:09:37.405 21.385 - 21.488: 97.9062% ( 5) 00:09:37.405 21.488 - 21.590: 97.9404% ( 3) 00:09:37.405 21.590 - 21.693: 98.0542% ( 10) 00:09:37.405 21.693 - 21.796: 98.0883% ( 3) 00:09:37.405 21.796 - 21.899: 98.1680% ( 7) 00:09:37.405 21.899 - 22.002: 98.1793% ( 1) 00:09:37.405 22.002 - 22.104: 98.2135% ( 3) 00:09:37.405 22.104 - 22.207: 98.2590% ( 4) 00:09:37.405 22.207 - 22.310: 98.2931% ( 3) 00:09:37.405 22.310 - 22.413: 98.3500% ( 5) 00:09:37.405 22.413 - 22.516: 98.3614% ( 1) 00:09:37.405 22.516 - 22.618: 98.3842% ( 2) 00:09:37.405 22.721 - 22.824: 98.4069% ( 2) 00:09:37.405 22.927 - 23.030: 98.4411% ( 3) 00:09:37.405 23.133 - 23.235: 98.4524% ( 1) 00:09:37.405 23.235 - 23.338: 98.4638% ( 1) 00:09:37.405 23.338 - 23.441: 98.4866% ( 2) 00:09:37.405 23.441 - 23.544: 98.4980% ( 1) 00:09:37.405 23.647 - 23.749: 98.5207% ( 2) 00:09:37.405 23.749 - 23.852: 98.5548% ( 3) 00:09:37.405 23.852 - 23.955: 98.5776% ( 2) 00:09:37.405 23.955 - 24.058: 98.6004% ( 2) 00:09:37.405 24.058 - 24.161: 98.6231% ( 2) 00:09:37.405 24.161 - 24.263: 98.6573% ( 3) 00:09:37.405 24.263 - 24.366: 98.6686% ( 1) 00:09:37.405 24.366 - 24.469: 98.7142% ( 4) 00:09:37.405 24.572 - 24.675: 98.7483% ( 3) 00:09:37.405 24.675 - 24.778: 98.8279% ( 7) 00:09:37.405 24.778 - 24.880: 98.8962% ( 6) 00:09:37.405 24.880 - 24.983: 98.9304% ( 3) 00:09:37.405 24.983 - 25.086: 98.9531% ( 2) 00:09:37.405 25.086 - 25.189: 98.9986% ( 4) 00:09:37.405 25.189 - 25.292: 99.0442% ( 4) 00:09:37.405 25.292 - 25.394: 99.0783% ( 3) 00:09:37.405 25.600 - 25.703: 99.1124% ( 3) 00:09:37.405 25.703 - 25.806: 99.1352% ( 2) 00:09:37.405 25.806 - 25.908: 99.1693% ( 3) 00:09:37.405 25.908 - 26.011: 99.2035% ( 3) 00:09:37.405 26.011 - 26.114: 99.2376% ( 3) 00:09:37.405 26.114 - 26.217: 99.2717% ( 3) 00:09:37.405 26.217 - 26.320: 99.2831% ( 1) 00:09:37.405 26.320 - 26.525: 99.2945% ( 1) 00:09:37.405 26.731 - 26.937: 99.3059% ( 1) 00:09:37.405 26.937 - 27.142: 99.3286% ( 2) 00:09:37.405 27.142 - 27.348: 99.3514% ( 2) 00:09:37.405 27.348 - 27.553: 99.3741% ( 2) 00:09:37.406 27.553 - 27.759: 99.3855% ( 1) 00:09:37.406 27.759 - 27.965: 99.3969% ( 1) 00:09:37.406 28.170 - 28.376: 99.4083% ( 1) 00:09:37.406 28.993 - 29.198: 99.4197% ( 1) 00:09:37.406 29.404 - 29.610: 99.4538% ( 3) 00:09:37.406 29.610 - 29.815: 99.4652% ( 1) 00:09:37.406 29.815 - 30.021: 99.4993% ( 3) 00:09:37.406 30.021 - 30.227: 99.5676% ( 6) 00:09:37.406 30.227 - 30.432: 99.6131% ( 4) 00:09:37.406 30.432 - 30.638: 99.6814% ( 6) 00:09:37.406 30.638 - 30.843: 99.7497% ( 6) 00:09:37.406 30.843 - 31.049: 99.7724% ( 2) 00:09:37.406 31.049 - 31.255: 99.8293% ( 5) 00:09:37.406 31.255 - 31.460: 99.8407% ( 1) 00:09:37.406 31.460 - 31.666: 99.8521% ( 1) 00:09:37.406 31.871 - 32.077: 99.8635% ( 1) 00:09:37.406 32.283 - 32.488: 99.8748% ( 1) 00:09:37.406 35.778 - 35.984: 99.8862% ( 1) 00:09:37.406 40.096 - 40.302: 99.9090% ( 2) 00:09:37.406 40.302 - 40.508: 99.9317% ( 2) 00:09:37.406 40.713 - 40.919: 99.9431% ( 1) 00:09:37.406 41.330 - 41.536: 99.9545% ( 1) 00:09:37.406 43.798 - 
44.003: 99.9659% ( 1) 00:09:37.406 49.144 - 49.349: 99.9772% ( 1) 00:09:37.406 49.761 - 49.966: 99.9886% ( 1) 00:09:37.406 58.808 - 59.219: 100.0000% ( 1) 00:09:37.406 00:09:37.406 Complete histogram 00:09:37.406 ================== 00:09:37.406 Range in us Cumulative Count 00:09:37.406 7.762 - 7.814: 0.0114% ( 1) 00:09:37.406 7.814 - 7.865: 0.0569% ( 4) 00:09:37.406 7.865 - 7.916: 0.4210% ( 32) 00:09:37.406 7.916 - 7.968: 1.3086% ( 78) 00:09:37.406 7.968 - 8.019: 2.4465% ( 100) 00:09:37.406 8.019 - 8.071: 3.9144% ( 129) 00:09:37.406 8.071 - 8.122: 5.0523% ( 100) 00:09:37.406 8.122 - 8.173: 6.2472% ( 105) 00:09:37.406 8.173 - 8.225: 7.3509% ( 97) 00:09:37.406 8.225 - 8.276: 8.5685% ( 107) 00:09:37.406 8.276 - 8.328: 9.8771% ( 115) 00:09:37.406 8.328 - 8.379: 11.6409% ( 155) 00:09:37.406 8.379 - 8.431: 13.2681% ( 143) 00:09:37.406 8.431 - 8.482: 15.0205% ( 154) 00:09:37.406 8.482 - 8.533: 17.5239% ( 220) 00:09:37.406 8.533 - 8.585: 22.2690% ( 417) 00:09:37.406 8.585 - 8.636: 27.3896% ( 450) 00:09:37.406 8.636 - 8.688: 31.5658% ( 367) 00:09:37.406 8.688 - 8.739: 35.7761% ( 370) 00:09:37.406 8.739 - 8.790: 39.4743% ( 325) 00:09:37.406 8.790 - 8.842: 42.5580% ( 271) 00:09:37.406 8.842 - 8.893: 45.1183% ( 225) 00:09:37.406 8.893 - 8.945: 46.9504% ( 161) 00:09:37.406 8.945 - 8.996: 48.4980% ( 136) 00:09:37.406 8.996 - 9.047: 49.8066% ( 115) 00:09:37.406 9.047 - 9.099: 51.4110% ( 141) 00:09:37.406 9.099 - 9.150: 52.9358% ( 134) 00:09:37.406 9.150 - 9.202: 54.4151% ( 130) 00:09:37.406 9.202 - 9.253: 56.3496% ( 170) 00:09:37.406 9.253 - 9.304: 58.6026% ( 198) 00:09:37.406 9.304 - 9.356: 61.0036% ( 211) 00:09:37.406 9.356 - 9.407: 63.9736% ( 261) 00:09:37.406 9.407 - 9.459: 67.1370% ( 278) 00:09:37.406 9.459 - 9.510: 70.4597% ( 292) 00:09:37.406 9.510 - 9.561: 73.7028% ( 285) 00:09:37.406 9.561 - 9.613: 76.6614% ( 260) 00:09:37.406 9.613 - 9.664: 79.7337% ( 270) 00:09:37.406 9.664 - 9.716: 82.4989% ( 243) 00:09:37.406 9.716 - 9.767: 84.7633% ( 199) 00:09:37.406 9.767 - 9.818: 86.6295% ( 164) 00:09:37.406 9.818 - 9.870: 88.5412% ( 168) 00:09:37.406 9.870 - 9.921: 89.9522% ( 124) 00:09:37.406 9.921 - 9.973: 91.1356% ( 104) 00:09:37.406 9.973 - 10.024: 92.0005% ( 76) 00:09:37.406 10.024 - 10.076: 92.6718% ( 59) 00:09:37.406 10.076 - 10.127: 93.1611% ( 43) 00:09:37.406 10.127 - 10.178: 93.5935% ( 38) 00:09:37.406 10.178 - 10.230: 93.9690% ( 33) 00:09:37.406 10.230 - 10.281: 94.1511% ( 16) 00:09:37.406 10.281 - 10.333: 94.2877% ( 12) 00:09:37.406 10.333 - 10.384: 94.4470% ( 14) 00:09:37.406 10.384 - 10.435: 94.5380% ( 8) 00:09:37.406 10.435 - 10.487: 94.7201% ( 16) 00:09:37.406 10.487 - 10.538: 94.8111% ( 8) 00:09:37.406 10.538 - 10.590: 94.9021% ( 8) 00:09:37.406 10.590 - 10.641: 94.9590% ( 5) 00:09:37.406 10.641 - 10.692: 95.0387% ( 7) 00:09:37.406 10.692 - 10.744: 95.1297% ( 8) 00:09:37.406 10.744 - 10.795: 95.1980% ( 6) 00:09:37.406 10.795 - 10.847: 95.2435% ( 4) 00:09:37.406 10.847 - 10.898: 95.2890% ( 4) 00:09:37.406 10.898 - 10.949: 95.3687% ( 7) 00:09:37.406 10.949 - 11.001: 95.4597% ( 8) 00:09:37.406 11.001 - 11.052: 95.5052% ( 4) 00:09:37.406 11.052 - 11.104: 95.5394% ( 3) 00:09:37.406 11.104 - 11.155: 95.5621% ( 2) 00:09:37.406 11.155 - 11.206: 95.5849% ( 2) 00:09:37.406 11.206 - 11.258: 95.6190% ( 3) 00:09:37.406 11.258 - 11.309: 95.6304% ( 1) 00:09:37.406 11.309 - 11.361: 95.6532% ( 2) 00:09:37.406 11.361 - 11.412: 95.6759% ( 2) 00:09:37.406 11.412 - 11.463: 95.6873% ( 1) 00:09:37.406 11.463 - 11.515: 95.6987% ( 1) 00:09:37.406 11.515 - 11.566: 95.7101% ( 1) 00:09:37.406 11.566 - 11.618: 
95.7214% ( 1) 00:09:37.406 11.618 - 11.669: 95.7556% ( 3) 00:09:37.406 11.875 - 11.926: 95.7897% ( 3) 00:09:37.406 11.926 - 11.978: 95.8125% ( 2) 00:09:37.406 11.978 - 12.029: 95.8466% ( 3) 00:09:37.406 12.029 - 12.080: 95.8807% ( 3) 00:09:37.406 12.183 - 12.235: 95.8921% ( 1) 00:09:37.406 12.337 - 12.389: 95.9149% ( 2) 00:09:37.406 12.440 - 12.492: 95.9263% ( 1) 00:09:37.406 12.646 - 12.697: 95.9376% ( 1) 00:09:37.406 12.697 - 12.749: 95.9604% ( 2) 00:09:37.406 12.749 - 12.800: 95.9718% ( 1) 00:09:37.406 12.800 - 12.851: 95.9832% ( 1) 00:09:37.406 13.160 - 13.263: 95.9945% ( 1) 00:09:37.406 13.263 - 13.365: 96.0059% ( 1) 00:09:37.406 13.365 - 13.468: 96.0173% ( 1) 00:09:37.406 13.468 - 13.571: 96.0287% ( 1) 00:09:37.406 13.571 - 13.674: 96.0514% ( 2) 00:09:37.406 13.777 - 13.880: 96.0742% ( 2) 00:09:37.406 13.982 - 14.085: 96.0970% ( 2) 00:09:37.406 14.085 - 14.188: 96.1083% ( 1) 00:09:37.406 14.291 - 14.394: 96.1425% ( 3) 00:09:37.406 14.394 - 14.496: 96.2107% ( 6) 00:09:37.406 14.496 - 14.599: 96.2676% ( 5) 00:09:37.406 14.599 - 14.702: 96.3132% ( 4) 00:09:37.406 14.702 - 14.805: 96.3473% ( 3) 00:09:37.406 14.805 - 14.908: 96.3928% ( 4) 00:09:37.406 14.908 - 15.010: 96.4269% ( 3) 00:09:37.406 15.010 - 15.113: 96.4611% ( 3) 00:09:37.406 15.113 - 15.216: 96.5180% ( 5) 00:09:37.406 15.216 - 15.319: 96.5749% ( 5) 00:09:37.406 15.319 - 15.422: 96.6431% ( 6) 00:09:37.406 15.422 - 15.524: 96.7228% ( 7) 00:09:37.406 15.524 - 15.627: 96.7569% ( 3) 00:09:37.406 15.627 - 15.730: 96.8252% ( 6) 00:09:37.406 15.730 - 15.833: 96.8935% ( 6) 00:09:37.406 15.833 - 15.936: 96.9504% ( 5) 00:09:37.406 15.936 - 16.039: 97.0414% ( 8) 00:09:37.406 16.039 - 16.141: 97.1097% ( 6) 00:09:37.406 16.141 - 16.244: 97.1438% ( 3) 00:09:37.406 16.244 - 16.347: 97.2007% ( 5) 00:09:37.406 16.347 - 16.450: 97.2235% ( 2) 00:09:37.406 16.450 - 16.553: 97.2690% ( 4) 00:09:37.406 16.553 - 16.655: 97.3145% ( 4) 00:09:37.406 16.655 - 16.758: 97.3714% ( 5) 00:09:37.406 16.758 - 16.861: 97.4056% ( 3) 00:09:37.406 16.861 - 16.964: 97.4624% ( 5) 00:09:37.406 16.964 - 17.067: 97.4852% ( 2) 00:09:37.406 17.067 - 17.169: 97.5080% ( 2) 00:09:37.406 17.169 - 17.272: 97.5307% ( 2) 00:09:37.406 17.272 - 17.375: 97.5649% ( 3) 00:09:37.406 17.375 - 17.478: 97.5876% ( 2) 00:09:37.406 17.478 - 17.581: 97.6331% ( 4) 00:09:37.406 17.581 - 17.684: 97.6673% ( 3) 00:09:37.406 17.684 - 17.786: 97.6900% ( 2) 00:09:37.406 17.786 - 17.889: 97.7242% ( 3) 00:09:37.406 17.889 - 17.992: 97.7811% ( 5) 00:09:37.406 17.992 - 18.095: 97.8266% ( 4) 00:09:37.406 18.095 - 18.198: 97.8607% ( 3) 00:09:37.406 18.198 - 18.300: 97.8949% ( 3) 00:09:37.406 18.300 - 18.403: 97.9973% ( 9) 00:09:37.406 18.403 - 18.506: 98.0428% ( 4) 00:09:37.407 18.506 - 18.609: 98.1224% ( 7) 00:09:37.407 18.609 - 18.712: 98.1452% ( 2) 00:09:37.407 18.712 - 18.814: 98.2476% ( 9) 00:09:37.407 18.814 - 18.917: 98.2931% ( 4) 00:09:37.407 18.917 - 19.020: 98.3386% ( 4) 00:09:37.407 19.020 - 19.123: 98.3500% ( 1) 00:09:37.407 19.123 - 19.226: 98.3728% ( 2) 00:09:37.407 19.226 - 19.329: 98.3955% ( 2) 00:09:37.407 19.431 - 19.534: 98.4866% ( 8) 00:09:37.407 19.534 - 19.637: 98.5093% ( 2) 00:09:37.407 19.637 - 19.740: 98.5435% ( 3) 00:09:37.407 19.740 - 19.843: 98.6004% ( 5) 00:09:37.407 19.843 - 19.945: 98.6117% ( 1) 00:09:37.407 19.945 - 20.048: 98.6231% ( 1) 00:09:37.407 20.048 - 20.151: 98.6573% ( 3) 00:09:37.407 20.151 - 20.254: 98.7369% ( 7) 00:09:37.407 20.254 - 20.357: 98.7824% ( 4) 00:09:37.407 20.357 - 20.459: 98.8052% ( 2) 00:09:37.407 20.459 - 20.562: 98.8735% ( 6) 00:09:37.407 20.562 
- 20.665: 98.9417% ( 6) 00:09:37.407 20.665 - 20.768: 98.9645% ( 2) 00:09:37.407 20.768 - 20.871: 98.9986% ( 3) 00:09:37.407 20.871 - 20.973: 99.0669% ( 6) 00:09:37.407 21.076 - 21.179: 99.1010% ( 3) 00:09:37.407 21.179 - 21.282: 99.1352% ( 3) 00:09:37.407 21.282 - 21.385: 99.1807% ( 4) 00:09:37.407 21.385 - 21.488: 99.2262% ( 4) 00:09:37.407 21.488 - 21.590: 99.2490% ( 2) 00:09:37.407 21.590 - 21.693: 99.2604% ( 1) 00:09:37.407 21.796 - 21.899: 99.2831% ( 2) 00:09:37.407 21.899 - 22.002: 99.2945% ( 1) 00:09:37.407 22.002 - 22.104: 99.3059% ( 1) 00:09:37.407 22.104 - 22.207: 99.3173% ( 1) 00:09:37.407 22.207 - 22.310: 99.3400% ( 2) 00:09:37.407 22.413 - 22.516: 99.3514% ( 1) 00:09:37.407 22.516 - 22.618: 99.3741% ( 2) 00:09:37.407 22.927 - 23.030: 99.3855% ( 1) 00:09:37.407 23.338 - 23.441: 99.3969% ( 1) 00:09:37.407 24.469 - 24.572: 99.4197% ( 2) 00:09:37.407 24.572 - 24.675: 99.4310% ( 1) 00:09:37.407 24.675 - 24.778: 99.4424% ( 1) 00:09:37.407 25.086 - 25.189: 99.4652% ( 2) 00:09:37.407 25.189 - 25.292: 99.4766% ( 1) 00:09:37.407 25.394 - 25.497: 99.4879% ( 1) 00:09:37.407 25.497 - 25.600: 99.5221% ( 3) 00:09:37.407 25.600 - 25.703: 99.5448% ( 2) 00:09:37.407 25.806 - 25.908: 99.5676% ( 2) 00:09:37.407 25.908 - 26.011: 99.6017% ( 3) 00:09:37.407 26.011 - 26.114: 99.6359% ( 3) 00:09:37.407 26.114 - 26.217: 99.6700% ( 3) 00:09:37.407 26.217 - 26.320: 99.6928% ( 2) 00:09:37.407 26.320 - 26.525: 99.7497% ( 5) 00:09:37.407 27.142 - 27.348: 99.7724% ( 2) 00:09:37.407 28.376 - 28.582: 99.7838% ( 1) 00:09:37.407 28.582 - 28.787: 99.7952% ( 1) 00:09:37.407 29.198 - 29.404: 99.8066% ( 1) 00:09:37.407 30.021 - 30.227: 99.8293% ( 2) 00:09:37.407 30.432 - 30.638: 99.8407% ( 1) 00:09:37.407 30.843 - 31.049: 99.8521% ( 1) 00:09:37.407 32.077 - 32.283: 99.8635% ( 1) 00:09:37.407 34.133 - 34.339: 99.8748% ( 1) 00:09:37.407 34.750 - 34.956: 99.8862% ( 1) 00:09:37.407 38.040 - 38.246: 99.8976% ( 1) 00:09:37.407 39.274 - 39.480: 99.9090% ( 1) 00:09:37.407 41.741 - 41.947: 99.9203% ( 1) 00:09:37.407 44.209 - 44.414: 99.9317% ( 1) 00:09:37.407 44.414 - 44.620: 99.9431% ( 1) 00:09:37.407 45.854 - 46.059: 99.9545% ( 1) 00:09:37.407 49.144 - 49.349: 99.9659% ( 1) 00:09:37.407 60.042 - 60.453: 99.9772% ( 1) 00:09:37.407 105.279 - 106.101: 99.9886% ( 1) 00:09:37.407 110.214 - 111.036: 100.0000% ( 1) 00:09:37.407 00:09:37.407 00:09:37.407 real 0m1.353s 00:09:37.407 user 0m1.125s 00:09:37.407 sys 0m0.176s 00:09:37.407 15:02:38 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.407 ************************************ 00:09:37.407 END TEST nvme_overhead 00:09:37.407 ************************************ 00:09:37.407 15:02:38 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:37.407 15:02:38 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:37.407 15:02:38 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:37.407 15:02:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.407 15:02:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:37.407 ************************************ 00:09:37.407 START TEST nvme_arbitration 00:09:37.407 ************************************ 00:09:37.407 15:02:38 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:41.594 Initializing NVMe Controllers 00:09:41.594 Attached to 0000:00:10.0 00:09:41.594 Attached to 0000:00:11.0 00:09:41.594 Attached to 
0000:00:13.0 00:09:41.594 Attached to 0000:00:12.0 00:09:41.594 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:41.594 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:41.594 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:41.594 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:41.594 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:41.594 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:41.594 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:41.594 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:41.594 Initialization complete. Launching workers. 00:09:41.594 Starting thread on core 1 with urgent priority queue 00:09:41.594 Starting thread on core 2 with urgent priority queue 00:09:41.594 Starting thread on core 3 with urgent priority queue 00:09:41.594 Starting thread on core 0 with urgent priority queue 00:09:41.594 QEMU NVMe Ctrl (12340 ) core 0: 490.67 IO/s 203.80 secs/100000 ios 00:09:41.594 QEMU NVMe Ctrl (12342 ) core 0: 490.67 IO/s 203.80 secs/100000 ios 00:09:41.594 QEMU NVMe Ctrl (12341 ) core 1: 512.00 IO/s 195.31 secs/100000 ios 00:09:41.594 QEMU NVMe Ctrl (12342 ) core 1: 512.00 IO/s 195.31 secs/100000 ios 00:09:41.594 QEMU NVMe Ctrl (12343 ) core 2: 533.33 IO/s 187.50 secs/100000 ios 00:09:41.594 QEMU NVMe Ctrl (12342 ) core 3: 469.33 IO/s 213.07 secs/100000 ios 00:09:41.594 ======================================================== 00:09:41.594 00:09:41.594 00:09:41.594 real 0m3.458s 00:09:41.594 user 0m9.356s 00:09:41.594 sys 0m0.201s 00:09:41.594 15:02:41 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.594 ************************************ 00:09:41.594 END TEST nvme_arbitration 00:09:41.594 ************************************ 00:09:41.594 15:02:41 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:41.594 15:02:41 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:41.594 15:02:41 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:41.594 15:02:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.594 15:02:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:41.594 ************************************ 00:09:41.594 START TEST nvme_single_aen 00:09:41.594 ************************************ 00:09:41.594 15:02:41 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:41.594 Asynchronous Event Request test 00:09:41.594 Attached to 0000:00:10.0 00:09:41.594 Attached to 0000:00:11.0 00:09:41.594 Attached to 0000:00:13.0 00:09:41.594 Attached to 0000:00:12.0 00:09:41.594 Reset controller to setup AER completions for this process 00:09:41.594 Registering asynchronous event callbacks... 
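The single-AER test that starts here exercises the temperature-threshold feature end to end: it saves each controller's original threshold (343 Kelvin in this run), lowers the threshold beneath the current temperature so the controller raises an asynchronous event for log page 2 (SMART / Health Information), and restores the threshold from the aer_cb callback. A minimal sketch of the standalone invocation, with -T and -i copied from the command above (their exact semantics are an assumption; -T appears to select the temperature-threshold test):

    # Sketch: rerun the single-process AER temperature test.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    "$SPDK_DIR/test/nvme/aer/aer" -T -i 0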
00:09:41.594 Getting orig temperature thresholds of all controllers 00:09:41.594 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:41.594 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:41.594 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:41.594 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:41.594 Setting all controllers temperature threshold low to trigger AER 00:09:41.594 Waiting for all controllers temperature threshold to be set lower 00:09:41.594 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:41.594 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:41.594 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:41.594 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:41.594 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:41.594 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:41.594 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:41.594 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:41.594 Waiting for all controllers to trigger AER and reset threshold 00:09:41.594 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.594 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.594 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.594 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.594 Cleaning up... 00:09:41.594 00:09:41.594 real 0m0.317s 00:09:41.594 user 0m0.116s 00:09:41.594 sys 0m0.151s 00:09:41.594 15:02:42 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.594 15:02:42 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:41.594 ************************************ 00:09:41.594 END TEST nvme_single_aen 00:09:41.594 ************************************ 00:09:41.594 15:02:42 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:41.594 15:02:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:41.594 15:02:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.594 15:02:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:41.594 ************************************ 00:09:41.594 START TEST nvme_doorbell_aers 00:09:41.594 ************************************ 00:09:41.594 15:02:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:09:41.594 15:02:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:41.594 15:02:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:41.594 15:02:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:41.594 15:02:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:41.594 15:02:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:41.594 15:02:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:09:41.594 15:02:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:41.594 15:02:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:41.594 15:02:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
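The xtrace above shows how the harness discovers controllers: gen_nvme.sh emits a JSON configuration and jq extracts each PCIe address (traddr). The doorbell/AER exerciser is then run once per device under a 10-second timeout, exactly as in the loop that follows. A self-contained sketch of the same enumerate-and-loop pattern, assuming a built SPDK tree at SPDK_DIR:

    #!/usr/bin/env bash
    # Sketch: enumerate NVMe PCIe addresses the way the harness does,
    # then run doorbell_aers against each for at most 10 seconds.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    bdfs=($("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe devices found' >&2; exit 1; }
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$SPDK_DIR/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done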
00:09:41.594 15:02:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:41.594 15:02:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:41.594 15:02:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:41.594 15:02:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:41.923 [2024-11-20 15:02:42.526080] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64660) is not found. Dropping the request. 00:09:51.902 Executing: test_write_invalid_db 00:09:51.902 Waiting for AER completion... 00:09:51.902 Failure: test_write_invalid_db 00:09:51.902 00:09:51.902 Executing: test_invalid_db_write_overflow_sq 00:09:51.902 Waiting for AER completion... 00:09:51.902 Failure: test_invalid_db_write_overflow_sq 00:09:51.902 00:09:51.902 Executing: test_invalid_db_write_overflow_cq 00:09:51.902 Waiting for AER completion... 00:09:51.902 Failure: test_invalid_db_write_overflow_cq 00:09:51.902 00:09:51.902 15:02:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:51.902 15:02:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:51.902 [2024-11-20 15:02:52.589062] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64660) is not found. Dropping the request. 00:10:01.871 Executing: test_write_invalid_db 00:10:01.871 Waiting for AER completion... 00:10:01.871 Failure: test_write_invalid_db 00:10:01.871 00:10:01.871 Executing: test_invalid_db_write_overflow_sq 00:10:01.871 Waiting for AER completion... 00:10:01.871 Failure: test_invalid_db_write_overflow_sq 00:10:01.871 00:10:01.871 Executing: test_invalid_db_write_overflow_cq 00:10:01.871 Waiting for AER completion... 00:10:01.871 Failure: test_invalid_db_write_overflow_cq 00:10:01.871 00:10:01.871 15:03:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:01.871 15:03:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:01.871 [2024-11-20 15:03:02.685758] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64660) is not found. Dropping the request. 00:10:11.884 Executing: test_write_invalid_db 00:10:11.884 Waiting for AER completion... 00:10:11.884 Failure: test_write_invalid_db 00:10:11.884 00:10:11.884 Executing: test_invalid_db_write_overflow_sq 00:10:11.884 Waiting for AER completion... 00:10:11.884 Failure: test_invalid_db_write_overflow_sq 00:10:11.884 00:10:11.884 Executing: test_invalid_db_write_overflow_cq 00:10:11.884 Waiting for AER completion... 
00:10:11.884 Failure: test_invalid_db_write_overflow_cq 00:10:11.884 00:10:11.884 15:03:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:11.884 15:03:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:12.142 [2024-11-20 15:03:12.724822] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64660) is not found. Dropping the request. 00:10:22.147 Executing: test_write_invalid_db 00:10:22.147 Waiting for AER completion... 00:10:22.147 Failure: test_write_invalid_db 00:10:22.147 00:10:22.147 Executing: test_invalid_db_write_overflow_sq 00:10:22.147 Waiting for AER completion... 00:10:22.147 Failure: test_invalid_db_write_overflow_sq 00:10:22.147 00:10:22.147 Executing: test_invalid_db_write_overflow_cq 00:10:22.147 Waiting for AER completion... 00:10:22.147 Failure: test_invalid_db_write_overflow_cq 00:10:22.147 00:10:22.147 ************************************ 00:10:22.147 END TEST nvme_doorbell_aers 00:10:22.147 ************************************ 00:10:22.147 00:10:22.147 real 0m40.358s 00:10:22.147 user 0m28.418s 00:10:22.147 sys 0m11.550s 00:10:22.147 15:03:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.147 15:03:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:22.147 15:03:22 nvme -- nvme/nvme.sh@97 -- # uname 00:10:22.147 15:03:22 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:22.147 15:03:22 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:22.147 15:03:22 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:22.147 15:03:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.147 15:03:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:22.147 ************************************ 00:10:22.147 START TEST nvme_multi_aen 00:10:22.147 ************************************ 00:10:22.147 15:03:22 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:22.147 [2024-11-20 15:03:22.801553] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64660) is not found. Dropping the request. 00:10:22.147 [2024-11-20 15:03:22.801656] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64660) is not found. Dropping the request. 00:10:22.147 [2024-11-20 15:03:22.801674] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64660) is not found. Dropping the request. 00:10:22.147 [2024-11-20 15:03:22.803520] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64660) is not found. Dropping the request. 00:10:22.147 [2024-11-20 15:03:22.803566] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64660) is not found. Dropping the request. 00:10:22.147 [2024-11-20 15:03:22.803581] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64660) is not found. Dropping the request. 00:10:22.147 [2024-11-20 15:03:22.805151] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64660) is not found. 
Dropping the request. 00:10:22.147 [2024-11-20 15:03:22.805191] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64660) is not found. Dropping the request. 00:10:22.147 [2024-11-20 15:03:22.805209] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64660) is not found. Dropping the request. 00:10:22.147 [2024-11-20 15:03:22.806619] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64660) is not found. Dropping the request. 00:10:22.147 [2024-11-20 15:03:22.806804] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64660) is not found. Dropping the request. 00:10:22.147 [2024-11-20 15:03:22.806824] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64660) is not found. Dropping the request. 00:10:22.147 Child process pid: 65181 00:10:22.406 [Child] Asynchronous Event Request test 00:10:22.406 [Child] Attached to 0000:00:10.0 00:10:22.406 [Child] Attached to 0000:00:11.0 00:10:22.406 [Child] Attached to 0000:00:13.0 00:10:22.406 [Child] Attached to 0000:00:12.0 00:10:22.406 [Child] Registering asynchronous event callbacks... 00:10:22.406 [Child] Getting orig temperature thresholds of all controllers 00:10:22.406 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:22.406 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:22.406 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:22.406 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:22.406 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:22.406 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:22.406 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:22.406 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:22.406 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:22.406 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:22.406 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:22.406 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:22.406 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:22.406 [Child] Cleaning up... 00:10:22.406 Asynchronous Event Request test 00:10:22.406 Attached to 0000:00:10.0 00:10:22.406 Attached to 0000:00:11.0 00:10:22.406 Attached to 0000:00:13.0 00:10:22.406 Attached to 0000:00:12.0 00:10:22.406 Reset controller to setup AER completions for this process 00:10:22.406 Registering asynchronous event callbacks... 
00:10:22.406 Getting orig temperature thresholds of all controllers 00:10:22.406 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:22.406 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:22.406 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:22.406 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:22.406 Setting all controllers temperature threshold low to trigger AER 00:10:22.406 Waiting for all controllers temperature threshold to be set lower 00:10:22.406 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:22.406 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:22.406 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:22.406 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:22.406 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:22.406 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:22.406 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:22.406 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:22.406 Waiting for all controllers to trigger AER and reset threshold 00:10:22.406 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:22.406 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:22.406 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:22.406 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:22.406 Cleaning up... 00:10:22.406 00:10:22.406 real 0m0.665s 00:10:22.406 user 0m0.234s 00:10:22.406 sys 0m0.321s 00:10:22.406 15:03:23 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.406 15:03:23 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:22.406 ************************************ 00:10:22.406 END TEST nvme_multi_aen 00:10:22.406 ************************************ 00:10:22.665 15:03:23 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:22.665 15:03:23 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:22.665 15:03:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.665 15:03:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:22.665 ************************************ 00:10:22.665 START TEST nvme_startup 00:10:22.665 ************************************ 00:10:22.665 15:03:23 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:22.924 Initializing NVMe Controllers 00:10:22.924 Attached to 0000:00:10.0 00:10:22.924 Attached to 0000:00:11.0 00:10:22.924 Attached to 0000:00:13.0 00:10:22.924 Attached to 0000:00:12.0 00:10:22.924 Initialization complete. 00:10:22.924 Time used:193596.484 (us). 
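nvme_startup attaches to all four controllers and reports how long initialization took (about 194 ms here). Rerunning it is a one-liner; the -t value is copied verbatim from the run_test invocation, and since the log never states its units it should be treated as an assumed timeout knob rather than a documented parameter:

    # Sketch: measure controller attach/initialization time.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    "$SPDK_DIR/test/nvme/startup/startup" -t 1000000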
00:10:22.924 00:10:22.924 real 0m0.306s 00:10:22.924 user 0m0.106s 00:10:22.924 sys 0m0.156s 00:10:22.924 15:03:23 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.924 15:03:23 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:22.924 ************************************ 00:10:22.924 END TEST nvme_startup 00:10:22.924 ************************************ 00:10:22.924 15:03:23 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:22.924 15:03:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:22.924 15:03:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.924 15:03:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:22.924 ************************************ 00:10:22.924 START TEST nvme_multi_secondary 00:10:22.924 ************************************ 00:10:22.924 15:03:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:10:22.924 15:03:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:22.924 15:03:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65236 00:10:22.924 15:03:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65237 00:10:22.924 15:03:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:22.924 15:03:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:27.114 Initializing NVMe Controllers 00:10:27.114 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:27.114 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:27.114 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:27.114 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:27.114 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:27.114 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:27.114 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:27.114 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:27.114 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:27.114 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:27.114 Initialization complete. Launching workers. 
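nvme_multi_secondary launches three spdk_nvme_perf instances concurrently against the same controllers. All three share -i 0, the DPDK shared-memory ID, so that one process acts as the primary while the others attach as secondaries (the interpretation suggested by the test's name); the 0x1 instance runs for 5 seconds while the 0x2 and 0x4 instances run for 3, which keeps the first process alive until the short ones exit. A condensed sketch of the same flow, with queue depth, core masks and run times copied from the invocations above (the one-second settle delay is an added assumption):

    #!/usr/bin/env bash
    # Sketch: one longer-running perf instance plus two short ones that
    # share shm id 0, mirroring nvme_multi_secondary above.
    PERF=${PERF:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf}
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &  # long-lived instance
    pid0=$!
    sleep 1  # assumption: let the first instance finish initializing
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
    pid1=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
    pid2=$!
    wait "$pid1" "$pid2" "$pid0"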
00:10:27.114 ======================================================== 00:10:27.114 Latency(us) 00:10:27.114 Device Information : IOPS MiB/s Average min max 00:10:27.114 PCIE (0000:00:10.0) NSID 1 from core 2: 3011.77 11.76 5309.72 1258.29 13583.86 00:10:27.114 PCIE (0000:00:11.0) NSID 1 from core 2: 3011.77 11.76 5311.68 1387.98 14068.27 00:10:27.114 PCIE (0000:00:13.0) NSID 1 from core 2: 3011.77 11.76 5312.05 1339.90 13888.53 00:10:27.114 PCIE (0000:00:12.0) NSID 1 from core 2: 3011.77 11.76 5311.47 1255.71 12631.55 00:10:27.114 PCIE (0000:00:12.0) NSID 2 from core 2: 3011.77 11.76 5312.10 1296.14 12481.90 00:10:27.114 PCIE (0000:00:12.0) NSID 3 from core 2: 3011.77 11.76 5311.54 1328.40 12640.37 00:10:27.114 ======================================================== 00:10:27.114 Total : 18070.65 70.59 5311.43 1255.71 14068.27 00:10:27.114 00:10:27.114 15:03:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65236 00:10:27.114 Initializing NVMe Controllers 00:10:27.114 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:27.114 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:27.114 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:27.114 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:27.114 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:27.114 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:27.114 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:27.114 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:27.114 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:27.114 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:27.114 Initialization complete. Launching workers. 00:10:27.114 ======================================================== 00:10:27.114 Latency(us) 00:10:27.114 Device Information : IOPS MiB/s Average min max 00:10:27.114 PCIE (0000:00:10.0) NSID 1 from core 1: 4908.40 19.17 3257.14 1308.33 8615.82 00:10:27.114 PCIE (0000:00:11.0) NSID 1 from core 1: 4908.40 19.17 3259.08 1241.98 9054.33 00:10:27.114 PCIE (0000:00:13.0) NSID 1 from core 1: 4908.40 19.17 3259.00 1278.18 8765.16 00:10:27.114 PCIE (0000:00:12.0) NSID 1 from core 1: 4908.40 19.17 3258.95 1293.39 8397.15 00:10:27.114 PCIE (0000:00:12.0) NSID 2 from core 1: 4908.40 19.17 3258.91 1270.32 8416.86 00:10:27.114 PCIE (0000:00:12.0) NSID 3 from core 1: 4908.40 19.17 3258.85 1245.97 8332.47 00:10:27.114 ======================================================== 00:10:27.114 Total : 29450.37 115.04 3258.66 1241.98 9054.33 00:10:27.114 00:10:28.490 Initializing NVMe Controllers 00:10:28.490 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:28.490 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:28.490 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:28.490 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:28.490 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:28.490 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:28.490 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:28.490 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:28.490 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:28.491 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:28.491 Initialization complete. Launching workers. 
00:10:28.491 ======================================================== 00:10:28.491 Latency(us) 00:10:28.491 Device Information : IOPS MiB/s Average min max 00:10:28.491 PCIE (0000:00:10.0) NSID 1 from core 0: 8142.41 31.81 1963.37 903.60 10202.01 00:10:28.491 PCIE (0000:00:11.0) NSID 1 from core 0: 8142.41 31.81 1964.56 935.57 10069.93 00:10:28.491 PCIE (0000:00:13.0) NSID 1 from core 0: 8142.41 31.81 1964.54 931.09 10449.78 00:10:28.491 PCIE (0000:00:12.0) NSID 1 from core 0: 8142.41 31.81 1964.51 924.78 10450.62 00:10:28.491 PCIE (0000:00:12.0) NSID 2 from core 0: 8142.41 31.81 1964.48 913.85 10840.47 00:10:28.491 PCIE (0000:00:12.0) NSID 3 from core 0: 8142.41 31.81 1964.46 878.09 10539.92 00:10:28.491 ======================================================== 00:10:28.491 Total : 48854.47 190.84 1964.32 878.09 10840.47 00:10:28.491 00:10:28.491 15:03:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65237 00:10:28.491 15:03:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65302 00:10:28.491 15:03:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:28.491 15:03:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65303 00:10:28.491 15:03:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:28.491 15:03:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:31.779 Initializing NVMe Controllers 00:10:31.779 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:31.779 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:31.779 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:31.779 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:31.779 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:31.779 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:31.779 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:31.779 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:31.779 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:31.779 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:31.779 Initialization complete. Launching workers. 
00:10:31.779 ======================================================== 00:10:31.779 Latency(us) 00:10:31.779 Device Information : IOPS MiB/s Average min max 00:10:31.779 PCIE (0000:00:10.0) NSID 1 from core 0: 5063.55 19.78 3157.43 941.72 9307.09 00:10:31.779 PCIE (0000:00:11.0) NSID 1 from core 0: 5063.55 19.78 3159.29 959.57 8632.14 00:10:31.779 PCIE (0000:00:13.0) NSID 1 from core 0: 5063.55 19.78 3159.31 956.06 7826.18 00:10:31.780 PCIE (0000:00:12.0) NSID 1 from core 0: 5063.55 19.78 3159.34 961.70 8436.96 00:10:31.780 PCIE (0000:00:12.0) NSID 2 from core 0: 5063.55 19.78 3159.51 955.78 8951.59 00:10:31.780 PCIE (0000:00:12.0) NSID 3 from core 0: 5068.88 19.80 3156.20 957.34 8713.27 00:10:31.780 ======================================================== 00:10:31.780 Total : 30386.65 118.70 3158.51 941.72 9307.09 00:10:31.780 00:10:31.780 Initializing NVMe Controllers 00:10:31.780 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:31.780 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:31.780 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:31.780 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:31.780 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:31.780 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:31.780 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:31.780 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:31.780 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:31.780 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:31.780 Initialization complete. Launching workers. 00:10:31.780 ======================================================== 00:10:31.780 Latency(us) 00:10:31.780 Device Information : IOPS MiB/s Average min max 00:10:31.780 PCIE (0000:00:10.0) NSID 1 from core 1: 5034.06 19.66 3175.83 988.29 8531.56 00:10:31.780 PCIE (0000:00:11.0) NSID 1 from core 1: 5034.06 19.66 3177.55 1023.07 7892.71 00:10:31.780 PCIE (0000:00:13.0) NSID 1 from core 1: 5034.06 19.66 3177.47 1035.91 7871.64 00:10:31.780 PCIE (0000:00:12.0) NSID 1 from core 1: 5034.06 19.66 3177.34 958.05 8066.50 00:10:31.780 PCIE (0000:00:12.0) NSID 2 from core 1: 5034.06 19.66 3177.23 871.02 8328.33 00:10:31.780 PCIE (0000:00:12.0) NSID 3 from core 1: 5034.06 19.66 3177.14 833.23 8526.18 00:10:31.780 ======================================================== 00:10:31.780 Total : 30204.35 117.99 3177.09 833.23 8531.56 00:10:31.780 00:10:34.314 Initializing NVMe Controllers 00:10:34.314 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:34.314 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:34.314 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:34.314 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:34.314 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:34.314 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:34.314 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:34.314 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:34.314 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:34.314 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:34.314 Initialization complete. Launching workers. 
00:10:34.314 ======================================================== 00:10:34.314 Latency(us) 00:10:34.314 Device Information : IOPS MiB/s Average min max 00:10:34.314 PCIE (0000:00:10.0) NSID 1 from core 2: 3216.79 12.57 4972.05 1081.61 14149.29 00:10:34.314 PCIE (0000:00:11.0) NSID 1 from core 2: 3216.79 12.57 4973.01 1103.01 13855.83 00:10:34.314 PCIE (0000:00:13.0) NSID 1 from core 2: 3216.79 12.57 4973.41 1093.35 14528.67 00:10:34.314 PCIE (0000:00:12.0) NSID 1 from core 2: 3216.79 12.57 4973.06 1124.43 18974.28 00:10:34.314 PCIE (0000:00:12.0) NSID 2 from core 2: 3216.79 12.57 4972.96 1092.95 19416.18 00:10:34.314 PCIE (0000:00:12.0) NSID 3 from core 2: 3216.79 12.57 4973.12 1005.67 15167.51 00:10:34.314 ======================================================== 00:10:34.314 Total : 19300.74 75.39 4972.94 1005.67 19416.18 00:10:34.314 00:10:34.314 ************************************ 00:10:34.315 END TEST nvme_multi_secondary 00:10:34.315 ************************************ 00:10:34.315 15:03:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65302 00:10:34.315 15:03:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65303 00:10:34.315 00:10:34.315 real 0m11.066s 00:10:34.315 user 0m18.590s 00:10:34.315 sys 0m1.131s 00:10:34.315 15:03:34 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.315 15:03:34 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:34.315 15:03:34 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:34.315 15:03:34 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:34.315 15:03:34 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64235 ]] 00:10:34.315 15:03:34 nvme -- common/autotest_common.sh@1094 -- # kill 64235 00:10:34.315 15:03:34 nvme -- common/autotest_common.sh@1095 -- # wait 64235 00:10:34.315 [2024-11-20 15:03:34.743714] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65179) is not found. Dropping the request. 00:10:34.315 [2024-11-20 15:03:34.743799] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65179) is not found. Dropping the request. 00:10:34.315 [2024-11-20 15:03:34.743843] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65179) is not found. Dropping the request. 00:10:34.315 [2024-11-20 15:03:34.743871] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65179) is not found. Dropping the request. 00:10:34.315 [2024-11-20 15:03:34.746638] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65179) is not found. Dropping the request. 00:10:34.315 [2024-11-20 15:03:34.746714] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65179) is not found. Dropping the request. 00:10:34.315 [2024-11-20 15:03:34.747003] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65179) is not found. Dropping the request. 00:10:34.315 [2024-11-20 15:03:34.747037] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65179) is not found. Dropping the request. 00:10:34.315 [2024-11-20 15:03:34.749795] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65179) is not found. Dropping the request. 
00:10:34.315 [2024-11-20 15:03:34.750030] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65179) is not found. Dropping the request. 00:10:34.315 [2024-11-20 15:03:34.750235] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65179) is not found. Dropping the request. 00:10:34.315 [2024-11-20 15:03:34.750448] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65179) is not found. Dropping the request. 00:10:34.315 [2024-11-20 15:03:34.753254] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65179) is not found. Dropping the request. 00:10:34.315 [2024-11-20 15:03:34.753466] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65179) is not found. Dropping the request. 00:10:34.315 [2024-11-20 15:03:34.753499] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65179) is not found. Dropping the request. 00:10:34.315 [2024-11-20 15:03:34.753529] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65179) is not found. Dropping the request. 00:10:34.315 15:03:34 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:10:34.315 15:03:34 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:10:34.315 15:03:34 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:34.315 15:03:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:34.315 15:03:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.315 15:03:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:34.315 ************************************ 00:10:34.315 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:34.315 ************************************ 00:10:34.315 15:03:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:34.315 * Looking for test storage... 
00:10:34.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:34.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.315 --rc genhtml_branch_coverage=1 00:10:34.315 --rc genhtml_function_coverage=1 00:10:34.315 --rc genhtml_legend=1 00:10:34.315 --rc geninfo_all_blocks=1 00:10:34.315 --rc geninfo_unexecuted_blocks=1 00:10:34.315 00:10:34.315 ' 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:34.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.315 --rc genhtml_branch_coverage=1 00:10:34.315 --rc genhtml_function_coverage=1 00:10:34.315 --rc genhtml_legend=1 00:10:34.315 --rc geninfo_all_blocks=1 00:10:34.315 --rc geninfo_unexecuted_blocks=1 00:10:34.315 00:10:34.315 ' 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:34.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.315 --rc genhtml_branch_coverage=1 00:10:34.315 --rc genhtml_function_coverage=1 00:10:34.315 --rc genhtml_legend=1 00:10:34.315 --rc geninfo_all_blocks=1 00:10:34.315 --rc geninfo_unexecuted_blocks=1 00:10:34.315 00:10:34.315 ' 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:34.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.315 --rc genhtml_branch_coverage=1 00:10:34.315 --rc genhtml_function_coverage=1 00:10:34.315 --rc genhtml_legend=1 00:10:34.315 --rc geninfo_all_blocks=1 00:10:34.315 --rc geninfo_unexecuted_blocks=1 00:10:34.315 00:10:34.315 ' 00:10:34.315 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:34.316 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:34.316 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:34.316 
15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:34.316 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:34.574 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:34.574 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:34.574 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:10:34.574 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:34.574 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:34.574 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:34.574 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:10:34.574 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:34.574 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:34.574 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:34.574 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:34.574 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:34.574 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:34.574 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:34.574 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:34.574 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65469 00:10:34.574 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:34.575 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:34.575 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65469 00:10:34.575 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65469 ']' 00:10:34.575 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.575 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.575 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
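At this point the test starts the SPDK application (spdk_tgt -m 0xF, four cores) and waitforlisten polls until the RPC socket at /var/tmp/spdk.sock answers, giving up after max_retries=100. A standalone approximation of that start-and-wait pattern is sketched below; rpc_get_methods is a standard SPDK RPC used here only as a liveness probe, and the 0.5 s poll interval is an assumption (the real helper's interval is not shown in the log):

    #!/usr/bin/env bash
    # Sketch: start spdk_tgt and poll its RPC socket until it answers.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0xF &
    tgt_pid=$!
    for ((i = 0; i < 100; i++)); do
        if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            echo "spdk_tgt (pid $tgt_pid) is listening"
            break
        fi
        sleep 0.5
    done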
00:10:34.575 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.575 15:03:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:34.575 [2024-11-20 15:03:35.342974] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:10:34.575 [2024-11-20 15:03:35.343402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65469 ] 00:10:34.838 [2024-11-20 15:03:35.564465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.095 [2024-11-20 15:03:35.729403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.095 [2024-11-20 15:03:35.729502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.095 [2024-11-20 15:03:35.729602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.095 [2024-11-20 15:03:35.729938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.027 15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.027 15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:10:36.027 15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:36.027 15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.027 15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:36.286 nvme0n1 00:10:36.286 15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.286 15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:36.286 15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_rpSUn.txt 00:10:36.286 15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:36.286 15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.286 15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:36.286 true 00:10:36.286 15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.286 15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:36.286 15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732115016 00:10:36.286 15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65502 00:10:36.286 15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:36.286 15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:36.286 
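Annotation: the error injection armed below is what makes the admin queue look stuck: rpc.py attaches the controller, then tells the bdev_nvme layer to hold the next matching admin command instead of submitting it. The same sequence as standalone rpc.py calls, with every flag exactly as traced (opc 10 is the Get Features admin opcode, 0x0a):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
# Hold one admin Get Features command (for up to 15 s) without submitting it;
# when the controller is reset, the command is completed manually with the
# injected SCT=0/SC=1, which the log prints as INVALID OPCODE (00/01):
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

A background bdev_nvme_send_cmd then issues the Get Features (cdw10=7, Number of Queues) that the injection traps, as the trace below shows.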
15:03:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:38.184 15:03:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:38.184 15:03:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.184 15:03:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:38.184 [2024-11-20 15:03:38.945732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:38.184 [2024-11-20 15:03:38.946309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:38.184 [2024-11-20 15:03:38.946453] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:38.184 [2024-11-20 15:03:38.946558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.184 [2024-11-20 15:03:38.948297] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:38.184 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65502 00:10:38.184 15:03:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.184 15:03:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65502 00:10:38.184 15:03:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65502 00:10:38.184 15:03:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:38.184 15:03:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:38.184 15:03:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:38.184 15:03:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.185 15:03:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:38.185 15:03:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.185 15:03:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:38.185 15:03:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_rpSUn.txt 00:10:38.443 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:38.443 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:38.443 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:38.443 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:38.443 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:38.443 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:38.443 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_rpSUn.txt 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65469 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65469 ']' 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65469 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65469 00:10:38.444 killing process with pid 65469 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65469' 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65469 00:10:38.444 15:03:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65469 00:10:41.726 15:03:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:41.726 15:03:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:41.726 00:10:41.726 real 0m6.893s 00:10:41.726 user 0m24.169s 00:10:41.726 sys 0m0.900s 00:10:41.726 ************************************ 00:10:41.726 END TEST bdev_nvme_reset_stuck_adm_cmd 
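Annotation: the base64_decode_bits calls above recover the injected status from the raw 16-byte completion that bdev_nvme_send_cmd wrote to the tmp file: the Status Code comes out as 0x1 and the Status Code Type as 0x0, which the (( ... )) checks just above compare against the values armed earlier (and diff_time against test_timeout). A hedged sketch of the same decode; the helper's exact offset/width argument convention isn't fully visible in the trace, so the explicit shift/mask form here is an assumption:

cpl_b64=AAAAAAAAAAAAAAAAAAACAA==                # .cpl field extracted with jq above
bytes=($(base64 -d <<< "$cpl_b64" | hexdump -ve '/1 "0x%02x\n"'))
status=$(( (bytes[15] << 8) | bytes[14] ))      # completion DW3 status word, little-endian
sc=$(( (status >> 1) & 0xFF ))                  # Status Code      -> 0x1
sct=$(( (status >> 9) & 0x7 ))                  # Status Code Type -> 0x0
printf 'sc=0x%x sct=0x%x\n' "$sc" "$sct"        # must match --sc 1 --sct 0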
00:10:41.726 ************************************ 00:10:41.726 15:03:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.726 15:03:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:41.726 15:03:41 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:41.726 15:03:41 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:41.726 15:03:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:41.726 15:03:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.726 15:03:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:41.726 ************************************ 00:10:41.726 START TEST nvme_fio 00:10:41.726 ************************************ 00:10:41.726 15:03:41 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:10:41.726 15:03:41 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:41.726 15:03:41 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:41.726 15:03:41 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:41.726 15:03:41 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:41.726 15:03:41 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:10:41.726 15:03:41 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:41.726 15:03:41 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:41.726 15:03:41 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:41.726 15:03:42 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:41.726 15:03:42 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:41.726 15:03:42 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:41.726 15:03:42 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:41.726 15:03:42 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:41.726 15:03:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:41.726 15:03:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:41.726 15:03:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:41.726 15:03:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:41.985 15:03:42 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:41.985 15:03:42 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:41.985 15:03:42 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:41.985 15:03:42 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:41.985 15:03:42 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:41.985 15:03:42 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:41.985 15:03:42 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:41.985 15:03:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:41.985 15:03:42 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:41.985 15:03:42 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:41.985 15:03:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:41.985 15:03:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:41.985 15:03:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:41.985 15:03:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:41.985 15:03:42 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:41.985 15:03:42 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:41.985 15:03:42 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:41.986 15:03:42 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:42.244 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:42.244 fio-3.35 00:10:42.244 Starting 1 thread 00:10:46.429 00:10:46.429 test: (groupid=0, jobs=1): err= 0: pid=65660: Wed Nov 20 15:03:46 2024 00:10:46.429 read: IOPS=21.3k, BW=83.1MiB/s (87.1MB/s)(166MiB/2001msec) 00:10:46.429 slat (nsec): min=4263, max=77832, avg=5419.46, stdev=1220.38 00:10:46.429 clat (usec): min=225, max=12514, avg=3004.30, stdev=350.05 00:10:46.429 lat (usec): min=230, max=12592, avg=3009.72, stdev=350.46 00:10:46.429 clat percentiles (usec): 00:10:46.429 | 1.00th=[ 2311], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2900], 00:10:46.429 | 30.00th=[ 2933], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:10:46.429 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3163], 95.00th=[ 3359], 00:10:46.429 | 99.00th=[ 4146], 99.50th=[ 4686], 99.90th=[ 6456], 99.95th=[ 9634], 00:10:46.429 | 99.99th=[12125] 00:10:46.429 bw ( KiB/s): min=83392, max=85048, per=99.07%, avg=84261.33, stdev=831.09, samples=3 00:10:46.429 iops : min=20848, max=21262, avg=21065.33, stdev=207.77, samples=3 00:10:46.429 write: IOPS=21.1k, BW=82.5MiB/s (86.5MB/s)(165MiB/2001msec); 0 zone resets 00:10:46.429 slat (nsec): min=4371, max=41047, avg=5646.90, stdev=1194.96 00:10:46.429 clat (usec): min=197, max=12271, avg=3005.53, stdev=362.33 00:10:46.429 lat (usec): min=202, max=12298, avg=3011.18, stdev=362.73 00:10:46.429 clat percentiles (usec): 00:10:46.429 | 1.00th=[ 2212], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2900], 00:10:46.429 | 30.00th=[ 2933], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:10:46.429 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3163], 95.00th=[ 3359], 00:10:46.429 | 99.00th=[ 4228], 99.50th=[ 4686], 99.90th=[ 7570], 99.95th=[10159], 00:10:46.429 | 99.99th=[11863] 00:10:46.429 bw ( KiB/s): min=83392, max=85360, per=99.87%, avg=84362.67, stdev=984.27, samples=3 00:10:46.429 iops : min=20848, max=21340, avg=21090.67, stdev=246.07, samples=3 00:10:46.429 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:46.429 lat (msec) : 2=0.60%, 4=98.05%, 10=1.26%, 20=0.05% 00:10:46.429 cpu : usr=99.30%, sys=0.00%, ctx=4, 
majf=0, minf=607 00:10:46.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:46.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.429 issued rwts: total=42548,42256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.429 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.429 00:10:46.429 Run status group 0 (all jobs): 00:10:46.429 READ: bw=83.1MiB/s (87.1MB/s), 83.1MiB/s-83.1MiB/s (87.1MB/s-87.1MB/s), io=166MiB (174MB), run=2001-2001msec 00:10:46.429 WRITE: bw=82.5MiB/s (86.5MB/s), 82.5MiB/s-82.5MiB/s (86.5MB/s-86.5MB/s), io=165MiB (173MB), run=2001-2001msec 00:10:46.429 ----------------------------------------------------- 00:10:46.429 Suppressions used: 00:10:46.429 count bytes template 00:10:46.429 1 32 /usr/src/fio/parse.c 00:10:46.429 1 8 libtcmalloc_minimal.so 00:10:46.429 ----------------------------------------------------- 00:10:46.429 00:10:46.429 15:03:46 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:46.429 15:03:46 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:46.429 15:03:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:46.429 15:03:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:46.429 15:03:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:46.429 15:03:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:46.429 15:03:47 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:46.429 15:03:47 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:46.429 15:03:47 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:46.429 15:03:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:46.689 15:03:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:46.689 15:03:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:46.689 15:03:47 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:46.689 15:03:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:46.689 15:03:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:46.689 15:03:47 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:46.689 15:03:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:46.689 15:03:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:46.689 15:03:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:46.689 15:03:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:46.689 15:03:47 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:46.689 15:03:47 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:46.689 15:03:47 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:46.689 15:03:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:46.689 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:46.689 fio-3.35 00:10:46.689 Starting 1 thread 00:10:50.884 00:10:50.884 test: (groupid=0, jobs=1): err= 0: pid=65727: Wed Nov 20 15:03:50 2024 00:10:50.884 read: IOPS=19.9k, BW=77.8MiB/s (81.6MB/s)(156MiB/2001msec) 00:10:50.884 slat (usec): min=4, max=102, avg= 5.70, stdev= 1.65 00:10:50.884 clat (usec): min=223, max=14592, avg=3196.40, stdev=627.58 00:10:50.884 lat (usec): min=228, max=14667, avg=3202.10, stdev=628.39 00:10:50.884 clat percentiles (usec): 00:10:50.884 | 1.00th=[ 2212], 5.00th=[ 2900], 10.00th=[ 2933], 20.00th=[ 2966], 00:10:50.884 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3097], 00:10:50.884 | 70.00th=[ 3130], 80.00th=[ 3195], 90.00th=[ 3720], 95.00th=[ 4047], 00:10:50.884 | 99.00th=[ 6194], 99.50th=[ 7570], 99.90th=[ 9372], 99.95th=[11469], 00:10:50.884 | 99.99th=[14222] 00:10:50.884 bw ( KiB/s): min=72496, max=82304, per=98.03%, avg=78090.67, stdev=5047.80, samples=3 00:10:50.884 iops : min=18124, max=20576, avg=19522.67, stdev=1261.95, samples=3 00:10:50.884 write: IOPS=19.9k, BW=77.6MiB/s (81.4MB/s)(155MiB/2001msec); 0 zone resets 00:10:50.884 slat (nsec): min=4314, max=48771, avg=5952.78, stdev=1580.54 00:10:50.884 clat (usec): min=203, max=14336, avg=3207.68, stdev=639.84 00:10:50.884 lat (usec): min=209, max=14358, avg=3213.63, stdev=640.63 00:10:50.884 clat percentiles (usec): 00:10:50.884 | 1.00th=[ 2245], 5.00th=[ 2900], 10.00th=[ 2933], 20.00th=[ 2966], 00:10:50.884 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3097], 00:10:50.884 | 70.00th=[ 3130], 80.00th=[ 3195], 90.00th=[ 3785], 95.00th=[ 4080], 00:10:50.884 | 99.00th=[ 6325], 99.50th=[ 7701], 99.90th=[ 9634], 99.95th=[11863], 00:10:50.884 | 99.99th=[13960] 00:10:50.884 bw ( KiB/s): min=72544, max=82696, per=98.45%, avg=78250.67, stdev=5192.21, samples=3 00:10:50.884 iops : min=18136, max=20674, avg=19562.67, stdev=1298.05, samples=3 00:10:50.884 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:50.884 lat (msec) : 2=0.53%, 4=92.87%, 10=6.48%, 20=0.08% 00:10:50.884 cpu : usr=99.00%, sys=0.20%, ctx=4, majf=0, minf=608 00:10:50.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:50.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.884 issued rwts: total=39848,39761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.884 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.884 00:10:50.884 Run status group 0 (all jobs): 00:10:50.884 READ: bw=77.8MiB/s (81.6MB/s), 77.8MiB/s-77.8MiB/s (81.6MB/s-81.6MB/s), io=156MiB (163MB), run=2001-2001msec 00:10:50.884 WRITE: bw=77.6MiB/s (81.4MB/s), 77.6MiB/s-77.6MiB/s (81.4MB/s-81.4MB/s), io=155MiB (163MB), run=2001-2001msec 00:10:50.884 ----------------------------------------------------- 00:10:50.884 Suppressions used: 00:10:50.884 count bytes template 00:10:50.884 1 32 /usr/src/fio/parse.c 00:10:50.884 1 8 libtcmalloc_minimal.so 00:10:50.884 ----------------------------------------------------- 00:10:50.884 
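Annotation: each per-controller fio pass in this test (four of them, one per BDF) boils down to the same invocation: detect whether the plugin was built against ASan, preload accordingly, and point fio's SPDK ioengine at the PCIe address. Condensed from the trace, with paths as in this run; the colons of the BDF are written as dots because ':' is a separator inside fio filenames:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096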
00:10:50.884 15:03:51 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:50.884 15:03:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:50.884 15:03:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:50.884 15:03:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:50.884 15:03:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:50.884 15:03:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:51.143 15:03:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:51.143 15:03:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:51.143 15:03:51 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:51.143 15:03:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:51.143 15:03:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:51.143 15:03:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:51.143 15:03:51 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:51.143 15:03:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:51.144 15:03:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:51.144 15:03:51 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:51.144 15:03:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:51.144 15:03:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:51.144 15:03:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:51.144 15:03:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:51.144 15:03:51 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:51.144 15:03:51 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:51.144 15:03:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:51.144 15:03:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:51.403 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:51.403 fio-3.35 00:10:51.403 Starting 1 thread 00:10:55.595 00:10:55.595 test: (groupid=0, jobs=1): err= 0: pid=65793: Wed Nov 20 15:03:55 2024 00:10:55.595 read: IOPS=20.0k, BW=78.1MiB/s (81.9MB/s)(156MiB/2001msec) 00:10:55.595 slat (nsec): min=4455, max=60884, avg=5666.83, stdev=1662.12 00:10:55.595 clat (usec): min=217, max=10987, avg=3186.65, stdev=721.70 00:10:55.596 lat (usec): min=223, max=11048, avg=3192.32, stdev=722.66 00:10:55.596 clat percentiles (usec): 00:10:55.596 | 1.00th=[ 2180], 5.00th=[ 2802], 10.00th=[ 2868], 20.00th=[ 2900], 00:10:55.596 | 
30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:10:55.596 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3851], 95.00th=[ 4228], 00:10:55.596 | 99.00th=[ 7046], 99.50th=[ 7767], 99.90th=[ 8455], 99.95th=[ 8455], 00:10:55.596 | 99.99th=[10683] 00:10:55.596 bw ( KiB/s): min=76272, max=82352, per=98.47%, avg=78778.67, stdev=3177.25, samples=3 00:10:55.596 iops : min=19068, max=20588, avg=19694.67, stdev=794.31, samples=3 00:10:55.596 write: IOPS=20.0k, BW=78.0MiB/s (81.8MB/s)(156MiB/2001msec); 0 zone resets 00:10:55.596 slat (nsec): min=4591, max=56750, avg=5937.68, stdev=1753.34 00:10:55.596 clat (usec): min=258, max=10833, avg=3188.28, stdev=725.37 00:10:55.596 lat (usec): min=264, max=10855, avg=3194.22, stdev=726.37 00:10:55.596 clat percentiles (usec): 00:10:55.596 | 1.00th=[ 2114], 5.00th=[ 2802], 10.00th=[ 2868], 20.00th=[ 2900], 00:10:55.596 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:10:55.596 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3851], 95.00th=[ 4228], 00:10:55.596 | 99.00th=[ 7111], 99.50th=[ 7767], 99.90th=[ 8455], 99.95th=[ 8848], 00:10:55.596 | 99.99th=[10552] 00:10:55.596 bw ( KiB/s): min=76280, max=82160, per=98.76%, avg=78858.67, stdev=3005.88, samples=3 00:10:55.596 iops : min=19070, max=20540, avg=19714.67, stdev=751.47, samples=3 00:10:55.596 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:10:55.596 lat (msec) : 2=0.64%, 4=90.95%, 10=8.33%, 20=0.02% 00:10:55.596 cpu : usr=99.10%, sys=0.05%, ctx=4, majf=0, minf=607 00:10:55.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:55.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.596 issued rwts: total=40019,39942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.596 00:10:55.596 Run status group 0 (all jobs): 00:10:55.596 READ: bw=78.1MiB/s (81.9MB/s), 78.1MiB/s-78.1MiB/s (81.9MB/s-81.9MB/s), io=156MiB (164MB), run=2001-2001msec 00:10:55.596 WRITE: bw=78.0MiB/s (81.8MB/s), 78.0MiB/s-78.0MiB/s (81.8MB/s-81.8MB/s), io=156MiB (164MB), run=2001-2001msec 00:10:55.596 ----------------------------------------------------- 00:10:55.596 Suppressions used: 00:10:55.596 count bytes template 00:10:55.596 1 32 /usr/src/fio/parse.c 00:10:55.596 1 8 libtcmalloc_minimal.so 00:10:55.596 ----------------------------------------------------- 00:10:55.596 00:10:55.596 15:03:55 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:55.596 15:03:55 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:55.596 15:03:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:55.596 15:03:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:55.596 15:03:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:55.596 15:03:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:55.855 15:03:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:55.855 15:03:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:55.855 15:03:56 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:55.855 15:03:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:55.855 15:03:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:55.855 15:03:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:55.855 15:03:56 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:55.855 15:03:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:55.855 15:03:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:55.855 15:03:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:55.855 15:03:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:55.855 15:03:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:55.855 15:03:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:55.855 15:03:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:55.855 15:03:56 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:55.855 15:03:56 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:55.855 15:03:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:55.855 15:03:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:56.115 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:56.115 fio-3.35 00:10:56.115 Starting 1 thread 00:11:01.385 00:11:01.385 test: (groupid=0, jobs=1): err= 0: pid=65854: Wed Nov 20 15:04:02 2024 00:11:01.385 read: IOPS=21.1k, BW=82.3MiB/s (86.3MB/s)(165MiB/2001msec) 00:11:01.385 slat (nsec): min=4475, max=65880, avg=5513.92, stdev=1319.35 00:11:01.385 clat (usec): min=185, max=11423, avg=3028.12, stdev=474.28 00:11:01.385 lat (usec): min=190, max=11478, avg=3033.64, stdev=474.79 00:11:01.385 clat percentiles (usec): 00:11:01.385 | 1.00th=[ 1975], 5.00th=[ 2704], 10.00th=[ 2835], 20.00th=[ 2900], 00:11:01.385 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:11:01.385 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3228], 95.00th=[ 3458], 00:11:01.385 | 99.00th=[ 5080], 99.50th=[ 6063], 99.90th=[ 8586], 99.95th=[ 9110], 00:11:01.385 | 99.99th=[11207] 00:11:01.385 bw ( KiB/s): min=81920, max=86088, per=98.90%, avg=83376.00, stdev=2350.79, samples=3 00:11:01.385 iops : min=20480, max=21522, avg=20844.00, stdev=587.70, samples=3 00:11:01.385 write: IOPS=20.9k, BW=81.8MiB/s (85.8MB/s)(164MiB/2001msec); 0 zone resets 00:11:01.385 slat (nsec): min=4603, max=43219, avg=5778.04, stdev=1274.21 00:11:01.385 clat (usec): min=213, max=11311, avg=3034.33, stdev=496.37 00:11:01.385 lat (usec): min=219, max=11332, avg=3040.10, stdev=496.91 00:11:01.385 clat percentiles (usec): 00:11:01.385 | 1.00th=[ 1975], 5.00th=[ 2671], 10.00th=[ 2835], 20.00th=[ 2900], 00:11:01.385 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 2999], 00:11:01.385 | 70.00th=[ 3032], 80.00th=[ 
3097], 90.00th=[ 3228], 95.00th=[ 3458], 00:11:01.385 | 99.00th=[ 5211], 99.50th=[ 6194], 99.90th=[ 8717], 99.95th=[ 9241], 00:11:01.385 | 99.99th=[10814] 00:11:01.385 bw ( KiB/s): min=81616, max=86184, per=99.55%, avg=83426.67, stdev=2426.68, samples=3 00:11:01.385 iops : min=20404, max=21546, avg=20856.67, stdev=606.67, samples=3 00:11:01.385 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:01.385 lat (msec) : 2=1.01%, 4=96.91%, 10=2.01%, 20=0.03% 00:11:01.385 cpu : usr=99.15%, sys=0.15%, ctx=2, majf=0, minf=605 00:11:01.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:01.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.385 issued rwts: total=42171,41921,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.385 00:11:01.385 Run status group 0 (all jobs): 00:11:01.385 READ: bw=82.3MiB/s (86.3MB/s), 82.3MiB/s-82.3MiB/s (86.3MB/s-86.3MB/s), io=165MiB (173MB), run=2001-2001msec 00:11:01.385 WRITE: bw=81.8MiB/s (85.8MB/s), 81.8MiB/s-81.8MiB/s (85.8MB/s-85.8MB/s), io=164MiB (172MB), run=2001-2001msec 00:11:01.644 ----------------------------------------------------- 00:11:01.644 Suppressions used: 00:11:01.644 count bytes template 00:11:01.644 1 32 /usr/src/fio/parse.c 00:11:01.644 1 8 libtcmalloc_minimal.so 00:11:01.644 ----------------------------------------------------- 00:11:01.644 00:11:01.644 15:04:02 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:01.644 15:04:02 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:01.644 00:11:01.644 real 0m20.384s 00:11:01.644 user 0m16.120s 00:11:01.644 sys 0m3.268s 00:11:01.644 15:04:02 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.644 ************************************ 00:11:01.644 END TEST nvme_fio 00:11:01.644 ************************************ 00:11:01.644 15:04:02 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:01.644 ************************************ 00:11:01.644 END TEST nvme 00:11:01.644 ************************************ 00:11:01.644 00:11:01.644 real 1m37.160s 00:11:01.644 user 3m47.214s 00:11:01.644 sys 0m23.952s 00:11:01.644 15:04:02 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.644 15:04:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:01.644 15:04:02 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:11:01.644 15:04:02 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:01.644 15:04:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:01.644 15:04:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.644 15:04:02 -- common/autotest_common.sh@10 -- # set +x 00:11:01.644 ************************************ 00:11:01.644 START TEST nvme_scc 00:11:01.644 ************************************ 00:11:01.644 15:04:02 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:01.903 * Looking for test storage... 
00:11:01.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:01.903 15:04:02 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:01.903 15:04:02 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:01.903 15:04:02 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:01.903 15:04:02 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@345 -- # : 1 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@368 -- # return 0 00:11:01.903 15:04:02 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.903 15:04:02 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:01.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.903 --rc genhtml_branch_coverage=1 00:11:01.903 --rc genhtml_function_coverage=1 00:11:01.903 --rc genhtml_legend=1 00:11:01.903 --rc geninfo_all_blocks=1 00:11:01.903 --rc geninfo_unexecuted_blocks=1 00:11:01.903 00:11:01.903 ' 00:11:01.903 15:04:02 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:01.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.903 --rc genhtml_branch_coverage=1 00:11:01.903 --rc genhtml_function_coverage=1 00:11:01.903 --rc genhtml_legend=1 00:11:01.903 --rc geninfo_all_blocks=1 00:11:01.903 --rc geninfo_unexecuted_blocks=1 00:11:01.903 00:11:01.903 ' 00:11:01.903 15:04:02 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:11:01.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.903 --rc genhtml_branch_coverage=1 00:11:01.903 --rc genhtml_function_coverage=1 00:11:01.903 --rc genhtml_legend=1 00:11:01.903 --rc geninfo_all_blocks=1 00:11:01.903 --rc geninfo_unexecuted_blocks=1 00:11:01.903 00:11:01.903 ' 00:11:01.903 15:04:02 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:01.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.903 --rc genhtml_branch_coverage=1 00:11:01.903 --rc genhtml_function_coverage=1 00:11:01.903 --rc genhtml_legend=1 00:11:01.903 --rc geninfo_all_blocks=1 00:11:01.903 --rc geninfo_unexecuted_blocks=1 00:11:01.903 00:11:01.903 ' 00:11:01.903 15:04:02 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:01.903 15:04:02 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:01.903 15:04:02 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:01.903 15:04:02 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:01.903 15:04:02 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.903 15:04:02 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.903 15:04:02 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.903 15:04:02 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.903 15:04:02 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.903 15:04:02 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:01.903 15:04:02 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
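Annotation: scan_nvme_ctrls, which starts below, walks /sys/class/nvme/nvme*, maps each controller to its PCI address, and folds the nvme-cli id-ctrl output into one associative array per controller (nvme0[vid]=0x1b36, nvme0[mdts]=7, and so on in the trace that follows). A minimal sketch of that parsing pattern; the IFS=: / read -r reg val split is taken from the trace, while the loop body around it is a reconstruction:

declare -A nvme0
while IFS=: read -r reg val; do
  reg=${reg//[[:space:]]/}                 # register name, e.g. vid, sn, mdts
  [[ -n $reg && -n $val ]] || continue     # skip headers and blank lines
  nvme0[$reg]=${val# }                     # e.g. nvme0[vid]=0x1b36
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
echo "${nvme0[vid]} ${nvme0[mdts]}"        # 0x1b36 7 for this QEMU controller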
00:11:01.903 15:04:02 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:01.903 15:04:02 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:01.903 15:04:02 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:01.903 15:04:02 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:01.903 15:04:02 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:01.903 15:04:02 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:01.903 15:04:02 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:01.903 15:04:02 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:01.903 15:04:02 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:01.903 15:04:02 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:01.903 15:04:02 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:01.903 15:04:02 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:01.903 15:04:02 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:01.903 15:04:02 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:02.471 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:02.729 Waiting for block devices as requested 00:11:02.987 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:02.987 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:03.244 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:03.244 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:08.520 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:08.520 15:04:09 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:08.520 15:04:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:08.520 15:04:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:08.520 15:04:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:08.520 15:04:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:08.520 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:08.521 15:04:09 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:08.521 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:08.522 15:04:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # 
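The trace above (nvme/functions.sh@16-23) shows the core of the harness's nvme_get helper: it runs /usr/local/src/nvme-cli/nvme id-ctrl (or id-ns) once per device, splits every output line on the first ':' with IFS=: read -r reg val, and evals each non-empty value into a bash associative array such as nvme0 or ng0n1. A minimal standalone reconstruction of that loop follows; it assumes only that nvme-cli is installed and readable access to /dev/nvme0, and it is not the verbatim SPDK implementation, which wraps the same core in shift/nameref plumbing.

    #!/usr/bin/env bash
    # Sketch of the parse loop traced at nvme/functions.sh@21-23.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue      # keep only "key : value" lines
        reg=${reg//[[:space:]]/}       # strip the padding around the key
        ctrl[$reg]=${val# }            # functions.sh stores this via eval
    done < <(nvme id-ctrl /dev/nvme0)
    echo "controller NVMe version: ${ctrl[ver]:-unknown}"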
read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:11:08.522 
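For ng0n1 the size fields just captured are LBA counts: nsze, ncap and nuse all read 0x140000. Combined with the 4 KiB block size selected by flbas=0x4 (lbaf4 in the entries below), that is a fully utilized 5 GiB namespace. The arithmetic, checkable directly in bash:

    # 0x140000 blocks at the in-use 4 KiB block size (lbads:12) = 5 GiB
    nsze=0x140000 lbads=12
    echo "$((nsze)) blocks x $((1 << lbads)) B = $((nsze * (1 << lbads) / 2**30)) GiB"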
15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.522 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:11:08.523 15:04:09 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:08.523 15:04:09 nvme_scc 
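Each lbafN value stored above packs three fields from Identify Namespace: ms (metadata bytes per block), lbads (log2 of the data block size) and rp (relative performance); the '(in use)' marker on lbaf4 matches flbas=0x4. A small helper along these lines (the function name is ours, not part of functions.sh) turns one of those strings back into numbers:

    # Decode an lbafN string captured above, e.g. ng0n1[lbaf4].
    decode_lbaf() {
        [[ $1 =~ ms:([0-9]+)\ lbads:([0-9]+)\ rp:([0-9]+) ]] || return 1
        echo "block: $((1 << BASH_REMATCH[2])) B," \
             "metadata: ${BASH_REMATCH[1]} B, rel perf: ${BASH_REMATCH[3]}"
    }
    decode_lbaf 'ms:0 lbads:12 rp:0 (in use)'   # -> block: 4096 B, ...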
-- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:08.523 15:04:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.523 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:08.524 15:04:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:08.524 15:04:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:08.524 15:04:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:08.524 15:04:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:08.524 15:04:09 
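Note the bookkeeping at nvme/functions.sh@54-63: the extglob pattern matches both the ng0n1 character node and the nvme0n1 block node, and since ${ns##*n} reduces both names to namespace id 1, the nvme0n1 entry simply overwrites ng0n1 in nvme0_ns. The controller is then filed under ctrls, nvmes, bdfs and ordered_ctrls before the scan advances to nvme1. A sketch of how a consumer could walk those tables once the scan completes; the array names and contents match the trace, but the loop body is illustrative rather than taken from the harness:

    # Walk the per-controller tables built by the scan above.
    declare -A nvmes=([nvme0]=nvme0_ns) bdfs=([nvme0]=0000:00:11.0)
    declare -A nvme0_ns=([1]=nvme0n1)
    ordered_ctrls=(nvme0)                  # indexed by controller number

    show_ctrl() {
        local -n ns_map=${nvmes[$1]}       # nvme0_ns: nsid -> device node
        printf '%s at %s, ns: %s\n' "$1" "${bdfs[$1]}" "${ns_map[*]}"
    }
    for ctrl in "${ordered_ctrls[@]}"; do show_ctrl "$ctrl"; done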
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.524 
15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:08.524 
15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.524 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
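[Editor's note] The trace entries above and below show nvme/functions.sh filling a bash associative array (nvme1) from the plain-text output of "nvme id-ctrl": each "reg : value" line is split on ':' with IFS and stored via eval. A minimal sketch of that pattern follows, assuming nvme-cli is installed and the device node exists; the helper name parse_id_ctrl is hypothetical, the nameref (bash >= 4.3) stands in for the eval dance seen in the trace, and the nvme binary here is whatever is on PATH rather than the /usr/local/src/nvme-cli/nvme path the log uses.

  #!/usr/bin/env bash
  # Sketch: parse "nvme id-ctrl" text output into an associative array.
  declare -A nvme1=()
  parse_id_ctrl() {               # $1 = array name, $2 = device node
      local -n _out=$1            # nameref instead of the traced script's eval
      local reg val
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}            # strip padding around the field name
          [[ -n $reg && -n $val ]] || continue # skip headers/continuation lines
          _out[$reg]=${val# }                  # keep value, minus one leading space
      done < <(nvme id-ctrl "$2")
  }
  parse_id_ctrl nvme1 /dev/nvme1
  echo "model: ${nvme1[mn]:-?}, mdts: ${nvme1[mdts]:-?}"

This is why the log shows assignments like nvme1[mdts]=7: each id-ctrl field becomes one array entry keyed by the register name.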
00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.525 15:04:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.525 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:08.526 15:04:09 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
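[Editor's note] The enumeration step visible just above (the for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* loop and the [[ -e /sys/class/nvme/nvme1/ng1n1 ]] check) walks sysfs and picks up both the block namespaces (nvme1n1) and the generic char-device nodes (ng1n1) with one extglob. A self-contained sketch of that walk, using only standard sysfs paths; the "address" attribute read is an assumption about where to find the PCI BDF (the traced script derives 0000:00:10.0 its own way):

  #!/usr/bin/env bash
  # Sketch: enumerate NVMe controllers and their namespace nodes via sysfs.
  shopt -s extglob nullglob
  for ctrl in /sys/class/nvme/nvme+([0-9]); do
      bdf=$(cat "$ctrl/address" 2>/dev/null)   # e.g. 0000:00:10.0 for PCIe
      echo "controller ${ctrl##*/} at ${bdf:-unknown}"
      # "nvme1" -> matches both ng1n* (char) and nvme1n* (block) entries
      for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
          echo "  namespace node: ${ns##*/}"
      done
  done

For each node found, the traced script then calls nvme_get with "id-ns", which is why the same register-by-register assignments repeat below for ng1n1 and nvme1n1.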
00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:11:08.526 15:04:09 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:08.526 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 
15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
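[Editor's note] Once the id-ns fields are in the array, the in-use LBA format can be decoded: per the NVMe base spec, the low nibble of flbas indexes the lbafN descriptors, and lbads in the descriptor is log2 of the LBA data size. Using the values this log reports for nvme1n1 (flbas=0x7, lbaf7 = "ms:64 lbads:12 rp:0 (in use)"), that works out to 4096-byte data blocks with 64 bytes of per-block metadata. A small arithmetic check of that decoding, with the two fields hard-coded from the log so it runs standalone (the traced script does not contain this helper):

  #!/usr/bin/env bash
  # Sketch: decode the active LBA format from flbas + lbafN strings.
  declare -A nvme1n1=(
      [flbas]=0x7
      [lbaf7]='ms:64 lbads:12 rp:0 (in use)'
  )
  fmt=$(( nvme1n1[flbas] & 0xf ))           # FLBAS bits 3:0 select the format
  desc=${nvme1n1[lbaf$fmt]}
  [[ $desc =~ ms:([0-9]+)\ lbads:([0-9]+) ]] || exit 1
  ms=${BASH_REMATCH[1]}
  bs=$(( 1 << BASH_REMATCH[2] ))            # lbads is a power-of-two exponent
  echo "format $fmt: ${bs}-byte blocks, ${ms}-byte metadata"   # -> 4096 / 64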
00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:08.527 
15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.527 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.794 15:04:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:08.794 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:08.795 15:04:09 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:08.795 15:04:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:08.795 15:04:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:08.795 15:04:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:08.795 15:04:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
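The nvme_get calls traced above all follow the same pattern: run nvme-cli's id-ctrl (or id-ns) against the device, split each output line on the first ':' via IFS, and assign the pair into a bash associative array named after the device. A minimal self-contained sketch of that loop, under the assumption that `nvme id-ctrl` prints lines like "vid       : 0x1b36" (parse_id_output is a hypothetical name; the real functions.sh routes the array name through eval and local -gA so one helper can fill nvme1, nvme2, ng2n1, and so on):

    #!/usr/bin/env bash
    # Sketch of the key:value loop behind nvme_get (hypothetical helper name).
    declare -A ctrl=()
    parse_id_output() {
        local dev=$1 reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # strip the column padding around the key
            [[ -n $reg && -n $val ]] || continue
            ctrl[$reg]=${val# }             # keep the value text as reported
        done < <(nvme id-ctrl "$dev")
    }
    parse_id_output /dev/nvme2
    echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]}"

The guard before each assignment mirrors the `[[ -n ... ]]` checks in the trace: registers that id-ctrl leaves blank are simply skipped.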
00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:08.795 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
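The version register recorded just above (nvme2[ver]=0x10400) packs major, minor, and tertiary version bytes per the NVMe VS register layout, so this QEMU controller reports NVMe 1.4.0. Decoding the captured value:

    # VS layout: (major << 16) | (minor << 8) | tertiary
    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))   # -> NVMe 1.4.0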
00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:08.796 15:04:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
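The wctemp=343 and cctemp=373 values captured above are the warning and critical composite temperature thresholds, which NVMe reports in kelvins rather than degrees Celsius. Converting the traced values:

    # WCTEMP/CCTEMP are kelvins; 343 K and 373 K come from the trace above
    for t in 343 373; do
        printf '%d K = %d C\n' "$t" $(( t - 273 ))
    done   # -> 70 C warning, 100 C critical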
00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:08.796 15:04:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:08.796 
15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:08.796 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:08.797 
15:04:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
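The flbas=0x4 just recorded selects LBA format 4 for ng2n1, and each lbafN entry dumped a little further down encodes the data block size as a power of two (lbads) alongside a per-block metadata size (ms). A short sketch tying those fields together, using the values this trace reports for ng2n1 (lbaf4 carries ms:0 lbads:12 and is flagged "(in use)"):

    # FLBAS bits 0-3 index the in-use LBA format; lbads is log2(block size).
    flbas=0x4 lbads=12 nsze=0x100000       # values recorded for ng2n1
    fmt=$(( flbas & 0xf ))                 # -> 4, the entry marked "(in use)"
    bs=$(( 1 << lbads ))                   # -> 4096-byte blocks
    echo "lbaf$fmt block=${bs}B capacity=$(( nsze * bs / 1024**3 )) GiB"   # -> 4 GiB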
00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.797 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.798 15:04:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:08.798 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:11:08.799 15:04:09 nvme_scc -- 
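
The @16-@23 tags in the trace above show the shape of nvme_get: it takes the array name as its first argument, shifts it off, declares a global associative array with that name, runs the bundled nvme-cli binary (here /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2), and reads each "reg : val" output line with IFS=:, eval'ing every non-empty value into the array. Below is a minimal sketch of that pattern, not the verbatim helper from nvme/functions.sh; NVME_BIN is a stand-in name, and the real function also normalizes whitespace in keys and values:

    # Sketch of the traced pattern (nvme/functions.sh@16-@23); details of
    # the real helper may differ. NVME_BIN stands in for the pinned
    # /usr/local/src/nvme-cli/nvme binary used by the test.
    NVME_BIN=${NVME_BIN:-nvme}

    nvme_get() {
        local ref=$1 reg val
        shift                          # remaining args: id-ns /dev/<node>
        local -gA "$ref=()"            # global assoc array named after the node

        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # skip banner and blank lines
            reg=${reg//[[:space:]]/}           # "lbaf  4 " -> "lbaf4"
            eval "${ref}[\$reg]=\$val"         # e.g. ng2n2[nsze]=0x100000
        done < <("$NVME_BIN" "$@")
    }

    # Usage matching the trace: nvme_get ng2n2 id-ns /dev/ng2n2
    # afterwards "${ng2n2[nsze]}" etc. are available to the caller.
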
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 
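
The flbas=0x4 just recorded for ng2n2 selects LBA format 4, and the lbaf4 entry parsed further on reads "ms:0 lbads:12 rp:0 (in use)": 2^12 = 4096-byte data blocks with no per-block metadata. A quick decoding sketch, assuming the array was filled as in the sketch above:

    # The low four bits of flbas pick the in-use LBA format; lbads is a
    # power-of-two block size (here lbaf4 -> 2^12 = 4096 bytes).
    fmt=$((ng2n2[flbas] & 0xf))                               # -> 4
    [[ ${ng2n2[lbaf$fmt]} =~ lbads:([0-9]+) ]] &&
        echo "block size: $((1 << BASH_REMATCH[1])) bytes"    # -> 4096
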
15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.799 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:08.800 15:04:09 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:11:08.800 15:04:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.801 15:04:09 
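
The functions.sh@54-@58 tags above mark the outer loop: an extglob pattern under the controller's sysfs directory matches both the generic character nodes (ng2n*) and the block nodes (nvme2n*), runs nvme_get on each, and indexes the result by namespace number in _ctrl_ns. Roughly, under the same assumptions as the earlier sketch:

    # Sketch of the traced loop (nvme/functions.sh@54-@58); $ctrl is a
    # sysfs path such as /sys/class/nvme/nvme2. Requires extglob.
    shopt -s extglob nullglob
    declare -A _ctrl_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                         # e.g. ng2n3 or nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev              # keyed by namespace id
    done

Because the ng2n* nodes sort before nvme2n*, the later @58 assignments overwrite _ctrl_ns[1..3] with the block-device names.
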
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:11:08.801 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- 
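
From this point the loop revisits the same three namespaces through their block nodes (/dev/nvme2n1..n3), and id-ns reports identical identify data through either node type, which is why every field below repeats the ng2n* values. Once both arrays exist that is easy to sanity-check; a small sketch:

    # Fields parsed via the generic char node should match those parsed
    # via the block node for the same namespace.
    for key in "${!nvme2n1[@]}"; do
        [[ ${nvme2n1[$key]} == "${ng2n1[$key]}" ]] ||
            echo "mismatch on $key: '${nvme2n1[$key]}' vs '${ng2n1[$key]}'"
    done
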
nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.802 15:04:09 nvme_scc -- 
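
The nsze/ncap/nuse triple captured just above (0x100000 apiece) counts logical blocks, not bytes; with the 4096-byte blocks of lbaf4 that is a 4 GiB namespace, fully allocated and fully in use since all three values agree. For example:

    # 0x100000 blocks * 4096 B = 4 GiB; nsze == ncap == nuse means the
    # namespace is fully allocated and fully utilized.
    echo "$(( nvme2n1[nsze] * 4096 / 1024**3 )) GiB"    # -> 4 GiB
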
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:08.802 15:04:09 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- 
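
The mssrl/mcl/msrc values parsed in this stretch are Copy-command limits: mssrl and mcl are in logical blocks, while msrc is a 0's-based count of source ranges, so 127 means up to 128 ranges per Copy. A one-line illustration:

    # msrc is 0's-based: 127 -> 128 source ranges; mcl caps the total
    # copy length in logical blocks.
    echo "copy: <= ${nvme2n1[mcl]} blocks from <= $((nvme2n1[msrc] + 1)) ranges"
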
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:08.802 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:08.803 15:04:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:08.803 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:08.804 
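(Each lbafN descriptor captured above packs three fields: ms, the metadata bytes per LBA; lbads, the log2 of the data block size; and rp, relative performance. flbas=0x4 selects lbaf4 - the entry marked '(in use)' - so with ms:0 lbads:12 the namespace uses 2^12 = 4096-byte data blocks with no per-block metadata, while the lbads:9 formats would give 512-byte blocks. A small decoder for the descriptor strings exactly as they appear in this trace; the helper name is illustrative:

  decode_lbaf() {   # e.g. decode_lbaf 'ms:0 lbads:12 rp:0 (in use)'
      local desc=$1 ms lbads
      [[ $desc =~ ms:([0-9]+)\ lbads:([0-9]+) ]] || return 1
      ms=${BASH_REMATCH[1]} lbads=${BASH_REMATCH[2]}
      echo "block=$((1 << lbads)) B, metadata=${ms} B"
  }

  decode_lbaf 'ms:0 lbads:12 rp:0 (in use)'   # -> block=4096 B, metadata=0 B
  decode_lbaf 'ms:64 lbads:9 rp:0 '           # -> block=512 B, metadata=64 B
)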
15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:08.804 15:04:09 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.804 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:09.065 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:09.065 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.065 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:09.066 15:04:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.066 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:09.067 15:04:09 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:09.067 15:04:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:09.067 15:04:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:09.067 15:04:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:09.067 15:04:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 
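(With nvme2 fully catalogued - its namespaces in the nvme2_ns map, the controller recorded in ctrls, nvmes and bdfs keyed by device name, plus an index-ordered entry in ordered_ctrls - the outer loop advances to /sys/class/nvme/nvme3, resolves its PCI address 0000:00:13.0, and lets pci_can_use from scripts/common.sh decide whether the device may be touched before parsing it. A condensed sketch of that discovery loop under the same names; the BDF resolution via readlink and the always-true pci_can_use stub are simplifications, not the script's real logic:

  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  pci_can_use() { true; }   # stand-in for the allow/block-list check

  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0
      pci_can_use "$pci" || continue
      ctrl_dev=${ctrl##*/}                              # e.g. nvme3
      ctrls["$ctrl_dev"]=$ctrl_dev
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of its namespace map
      bdfs["$ctrl_dev"]=$pci
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # index 3 -> nvme3
  done
)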
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:09.067 15:04:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:09.067 15:04:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.067 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 
15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:09.068 15:04:09 nvme_scc -- 
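(The wctemp=343 and cctemp=373 captured just above are the controller's warning and critical composite-temperature thresholds, which id-ctrl reports in kelvins: 343 K and 373 K, i.e. about 70 C and 100 C. Once the array is populated they can be converted in place - an integer offset of 273 is used here for the usual back-of-the-envelope reading:

  printf 'warning: %d C, critical: %d C\n' \
      $((nvme3[wctemp] - 273)) $((nvme3[cctemp] - 273))
  # -> warning: 70 C, critical: 100 C
)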
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 
15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:09.068 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:09.069 
15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:09.069 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:09.070 15:04:09 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:09.070 15:04:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
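The xtrace around this point is the SCC capability scan: for each discovered controller, ctrl_has_scc fetches the ONCS (Optional NVM Command Support) value that was parsed out of id-ctrl above and tests bit 8, which advertises the Copy command. A minimal re-creation of that check, built only from what is visible in the trace (the helper's real body lives in test/common/nvme/functions.sh):

    # ONCS bit 8 advertises the (Simple) Copy command; 0x15d is the value
    # the trace reads back for every controller here.
    ctrl_has_scc() {
        local oncs=$1
        # 0x15d = 0b1_0101_1101 -> bit 8 (0x100) is set, so this succeeds
        (( oncs & 1 << 8 ))
    }

    ctrl_has_scc 0x15d && echo "controller supports SCC"

Because the loop iterates "${!ctrls[@]}" in hash order (nvme1, nvme0, nvme3, nvme2 here) and every controller passes, get_ctrl_with_feature ends up returning the first one echoed, nvme1, the controller at 0000:00:10.0.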
00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:11:09.070 15:04:09 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:11:09.070 15:04:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:09.070 15:04:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:11:09.070 15:04:09 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:09.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:10.573 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:10.573 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:10.573 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:10.573 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:10.830 15:04:11 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:10.830 15:04:11 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:10.830 15:04:11 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.830 15:04:11 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:10.830 ************************************ 00:11:10.830 START TEST nvme_simple_copy 00:11:10.830 ************************************ 00:11:10.830 15:04:11 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:11.088 Initializing NVMe Controllers 00:11:11.088 Attaching to 0000:00:10.0 00:11:11.088 Controller supports SCC. Attached to 0000:00:10.0 00:11:11.088 Namespace ID: 1 size: 6GB 00:11:11.088 Initialization complete. 
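The simple_copy output that follows compacts three steps: the test writes random data to LBAs 0 through 63, issues one Simple Copy command with destination LBA 256, and evidently reads both ranges back to count matching blocks ("LBAs matching Written Data: 64"). The verification amounts to comparing 64 blocks of 4096 bytes (the block size the test prints). Sketched below with coreutils against a hypothetical kernel block device, purely as an illustration; the actual test is an SPDK C program driving the controller at 0000:00:10.0 from userspace, so no /dev/nvme* node is involved.

    # Illustration only: what "LBAs matching Written Data: 64" asserts.
    # /dev/nvme0n1 is a stand-in; the real test bypasses the kernel driver.
    bs=4096                                    # "Namespace Block Size:4096"
    dd if=/dev/nvme0n1 of=src.bin bs=$bs skip=0   count=64 status=none
    dd if=/dev/nvme0n1 of=dst.bin bs=$bs skip=256 count=64 status=none
    cmp -s src.bin dst.bin && echo "LBAs matching Written Data: 64"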
00:11:11.088 00:11:11.088 Controller QEMU NVMe Ctrl (12340 ) 00:11:11.088 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:11.088 Namespace Block Size:4096 00:11:11.088 Writing LBAs 0 to 63 with Random Data 00:11:11.088 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:11.088 LBAs matching Written Data: 64 00:11:11.088 00:11:11.088 real 0m0.331s 00:11:11.088 user 0m0.112s 00:11:11.088 sys 0m0.117s 00:11:11.088 15:04:11 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.088 ************************************ 00:11:11.088 END TEST nvme_simple_copy 00:11:11.088 ************************************ 00:11:11.088 15:04:11 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:11:11.088 ************************************ 00:11:11.088 END TEST nvme_scc 00:11:11.088 ************************************ 00:11:11.088 00:11:11.088 real 0m9.372s 00:11:11.088 user 0m1.742s 00:11:11.088 sys 0m2.657s 00:11:11.088 15:04:11 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.088 15:04:11 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:11.088 15:04:11 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:11:11.088 15:04:11 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:11:11.088 15:04:11 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:11:11.088 15:04:11 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:11:11.088 15:04:11 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:11.088 15:04:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:11.088 15:04:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.088 15:04:11 -- common/autotest_common.sh@10 -- # set +x 00:11:11.088 ************************************ 00:11:11.088 START TEST nvme_fdp 00:11:11.088 ************************************ 00:11:11.088 15:04:11 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:11:11.345 * Looking for test storage... 00:11:11.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:11.345 15:04:12 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:11.345 15:04:12 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:11.345 15:04:12 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:11:11.345 15:04:12 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:11.345 15:04:12 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.345 15:04:12 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.345 15:04:12 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.345 15:04:12 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.345 15:04:12 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.345 15:04:12 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.345 15:04:12 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.345 15:04:12 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.345 15:04:12 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.345 15:04:12 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:11:11.346 15:04:12 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.346 15:04:12 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:11.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.346 --rc genhtml_branch_coverage=1 00:11:11.346 --rc genhtml_function_coverage=1 00:11:11.346 --rc genhtml_legend=1 00:11:11.346 --rc geninfo_all_blocks=1 00:11:11.346 --rc geninfo_unexecuted_blocks=1 00:11:11.346 00:11:11.346 ' 00:11:11.346 15:04:12 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:11.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.346 --rc genhtml_branch_coverage=1 00:11:11.346 --rc genhtml_function_coverage=1 00:11:11.346 --rc genhtml_legend=1 00:11:11.346 --rc geninfo_all_blocks=1 00:11:11.346 --rc geninfo_unexecuted_blocks=1 00:11:11.346 00:11:11.346 ' 00:11:11.346 15:04:12 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:11.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.346 --rc genhtml_branch_coverage=1 00:11:11.346 --rc genhtml_function_coverage=1 00:11:11.346 --rc genhtml_legend=1 00:11:11.346 --rc geninfo_all_blocks=1 00:11:11.346 --rc geninfo_unexecuted_blocks=1 00:11:11.346 00:11:11.346 ' 00:11:11.346 15:04:12 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:11.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.346 --rc genhtml_branch_coverage=1 00:11:11.346 --rc genhtml_function_coverage=1 00:11:11.346 --rc genhtml_legend=1 00:11:11.346 --rc geninfo_all_blocks=1 00:11:11.346 --rc geninfo_unexecuted_blocks=1 00:11:11.346 00:11:11.346 ' 00:11:11.346 15:04:12 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:11.346 15:04:12 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:11.346 15:04:12 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:11.346 15:04:12 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:11.346 15:04:12 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.346 15:04:12 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.346 15:04:12 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.346 15:04:12 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.346 15:04:12 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.346 15:04:12 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:11.346 15:04:12 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.346 15:04:12 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:11.346 15:04:12 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:11.346 15:04:12 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:11.346 15:04:12 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:11.346 15:04:12 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:11.346 15:04:12 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:11.346 15:04:12 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:11.346 15:04:12 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:11.346 15:04:12 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:11.346 15:04:12 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:11.346 15:04:12 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:11.912 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:12.170 Waiting for block devices as requested 00:11:12.429 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:12.429 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:12.429 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:12.688 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.963 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:17.963 15:04:18 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:17.963 15:04:18 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:17.963 15:04:18 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:17.963 15:04:18 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:17.963 15:04:18 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:17.963 15:04:18 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:17.963 15:04:18 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:17.963 15:04:18 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:17.963 15:04:18 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:17.963 15:04:18 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:17.963 15:04:18 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:17.963 15:04:18 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:17.963 15:04:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:17.963 15:04:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:17.963 15:04:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:17.963 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:17.964 15:04:18 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.964 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:17.965 15:04:18 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.965 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:17.966 15:04:18 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 
15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.966 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:17.966 15:04:18 nvme_fdp -- 
    icdoff=0  fcatt=0  msdbd=0  ofcs=0
    ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
    rwt='0 rwl:0 idle_power:- active_power:-'
    active_power_workload=-
00:11:17.967 15:04:18 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:11:17.967 15:04:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:17.967 15:04:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:11:17.967 15:04:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:11:17.967 15:04:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
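The @54 loop above builds a bash extglob pattern from two parameter expansions. A worked expansion for ctrl=/sys/class/nvme/nvme0 (shopt -s extglob is assumed to be set by the harness):

    ctrl=/sys/class/nvme/nvme0
    echo "${ctrl##*nvme}"   # -> 0      (strip the longest prefix ending in "nvme")
    echo "${ctrl##*/}"      # -> nvme0  (basename of the sysfs path)
    # so the pattern becomes @(ng0|nvme0n)*, which matches both namespace nodes
    # seen in the trace: ng0n1 (char device) and nvme0n1 (block device)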
00:11:17.967 15:04:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:11:17.967 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1 (id-ns):
    nsze=0x140000  ncap=0x140000  nuse=0x140000
    nsfeat=0x14  nlbaf=7  flbas=0x4  mc=0x3  dpc=0x1f  dps=0
    nmic=0  rescap=0  fpi=0  dlfeat=1
    nawun=0  nawupf=0  nacwu=0  nabsn=0
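flbas=0x4 above selects the active LBA format: per the NVMe base spec, FLBAS bits 3:0 hold the in-use format index. A quick check, not part of the traced script, which matches the "(in use)" marker on lbaf4 below:

    flbas=0x4
    echo "in use: lbaf$((flbas & 0xf))"   # -> in use: lbaf4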
    nabo=0  nabspf=0  noiob=0  nvmcap=0
    npwg=0  npwa=0  npdg=0  npda=0  nows=0
    mssrl=128  mcl=128  msrc=127
    nulbaf=0  anagrpid=0  nsattr=0  nvmsetid=0  endgid=0
    nguid=00000000000000000000000000000000  eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0'    lbaf1='ms:8 lbads:9 rp:0'
    lbaf2='ms:16 lbads:9 rp:0'   lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)'
    lbaf5='ms:8 lbads:12 rp:0'   lbaf6='ms:16 lbads:12 rp:0'   lbaf7='ms:64 lbads:12 rp:0'
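With the in-use format's lbads:12 (4096-byte blocks), the nsze/ncap/nuse value above works out to a 5 GiB namespace; a one-liner to confirm the arithmetic:

    nsze=0x140000; lbads=12
    echo $(( nsze * (1 << lbads) ))         # -> 5368709120 bytes
    echo $(( nsze * (1 << lbads) >> 30 ))   # -> 5 (GiB)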
00:11:17.969 15:04:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:11:17.969 15:04:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:17.969 15:04:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:11:17.969 15:04:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:11:17.969 15:04:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:11:17.969 15:04:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:11:17.969 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1 (id-ns, same values as ng0n1):
    nsze=0x140000  ncap=0x140000  nuse=0x140000
    nsfeat=0x14  nlbaf=7  flbas=0x4  mc=0x3  dpc=0x1f  dps=0
    nmic=0  rescap=0  fpi=0  dlfeat=1
    nawun=0  nawupf=0  nacwu=0  nabsn=0  nabo=0  nabspf=0
    noiob=0  nvmcap=0  npwg=0  npwa=0  npdg=0  npda=0  nows=0
    mssrl=128  mcl=128  msrc=127
    nulbaf=0  anagrpid=0  nsattr=0  nvmsetid=0  endgid=0
    nguid=00000000000000000000000000000000  eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0'    lbaf1='ms:8 lbads:9 rp:0'
    lbaf2='ms:16 lbads:9 rp:0'   lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)'
    lbaf5='ms:8 lbads:12 rp:0'   lbaf6='ms:16 lbads:12 rp:0'   lbaf7='ms:64 lbads:12 rp:0'
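The @58 assignments above and below key the per-controller namespace map by namespace ID via ${ns##*n}, so the char node (ng0n1) and the block node (nvme0n1) land on the same slot, with the block node parsed second and therefore the one the entry keeps. A worked expansion:

    ns=/sys/class/nvme/nvme0/ng0n1;    echo "${ns##*n}"   # -> 1
    ns=/sys/class/nvme/nvme0/nvme0n1;  echo "${ns##*n}"   # -> 1
    # both expand to the namespace ID: the pattern *n greedily strips
    # everything through the last 'n' in the sysfs path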
"' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:17.972 15:04:18 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:17.972 15:04:18 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:17.972 15:04:18 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:17.972 15:04:18 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:17.972 15:04:18 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:17.972 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.973 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:17.974 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.974 15:04:18 nvme_fdp -- 
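wctemp and cctemp above are reported in kelvins, per NVMe convention; converted for this QEMU controller's thresholds:

    echo $(( 343 - 273 ))   # wctemp -> 70  deg C (warning threshold)
    echo $(( 373 - 273 ))   # cctemp -> 100 deg C (critical threshold)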
    rpmbs=0  edstt=0  dsto=0  fwug=0  kas=0  hctma=0  mntmt=0  mxtmt=0
    sanicap=0  hmminds=0  hmmaxd=0  nsetidmax=0  endgidmax=0
    anatt=0  anacap=0  anagrpmax=0  nanagrpid=0  pels=0  domainid=0  megcap=0
    sqes=0x66  cqes=0x44  maxcmd=0  nn=256  oncs=0x15d  fuses=0  fna=0  vwc=0x7
00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@23
-- # eval 'nvme1[awun]="0"' 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.975 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.976 15:04:18 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:17.976 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:11:17.977 15:04:18 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
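
The controller fields captured a few entries back are raw NVMe spec encodings rather than human units: wctemp/cctemp are Kelvin, and sqes/cqes pack log2 entry sizes into nibbles (low nibble = required, high nibble = maximum). A quick decode of the values seen above:

    # composite temperature thresholds are reported in Kelvin
    wctemp=343 cctemp=373
    echo "warning: $(( wctemp - 273 ))C  critical: $(( cctemp - 273 ))C"   # 70C / 100C

    # sqes=0x66 / cqes=0x44: log2 entry sizes, low nibble required, high nibble max
    sqes=0x66 cqes=0x44
    echo "SQ entry: $(( 1 << (sqes & 0xf) )) bytes"         # 64
    echo "CQ entry: $(( 1 << ((cqes >> 4) & 0xf) )) bytes"  # 16
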
00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.977 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:11:17.978 15:04:18 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
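
The glob loop visible at functions.sh@54 is what produced this ng1n1 dump: for each controller it matches both the generic char node (ng1n1) and the block node (nvme1n1) under sysfs and indexes them by namespace id. A sketch of that enumeration, assuming extglob; variable names are illustrative:

    #!/usr/bin/env bash
    shopt -s extglob nullglob

    ctrl=/sys/class/nvme/nvme1
    declare -A ctrl_ns

    # matches both ng1n1 (generic char dev) and nvme1n1 (block dev)
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}
        ctrl_ns[${ns_dev##*n}]=$ns_dev   # keyed by NSID; the block node wins, as in the trace
    done

    declare -p ctrl_ns
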
00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:17.978 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:17.979 15:04:18 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:17.979 15:04:18 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:17.979 15:04:18 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.979 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:17.980 15:04:18 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.980 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
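
In both id-ns dumps, flbas=0x7 (bits 3:0 select the active LBA format) points at lbaf7 = 'ms:64 lbads:12', i.e. 4096-byte data blocks with 64 bytes of per-block metadata. A small decode of those two fields as captured:

    flbas=0x7
    lbaf7='ms:64 lbads:12 rp:0 (in use)'

    fmt=$(( flbas & 0xf ))   # 7, matching the "(in use)" marker above
    [[ $lbaf7 =~ ms:([0-9]+)\ lbads:([0-9]+) ]]
    echo "lbaf$fmt: $(( 1 << BASH_REMATCH[2] ))-byte blocks + ${BASH_REMATCH[1]}B metadata"
    # -> lbaf7: 4096-byte blocks + 64B metadata
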
00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:17.981 15:04:18 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:17.981 15:04:18 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:17.981 15:04:18 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:17.981 15:04:18 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:17.981 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
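
ctratt (0x8000 for nvme2 above) is a controller-attributes bitmask, and the FDP test ultimately keys off one of its bits; treating bit 19 as the FDP-supported flag (an assumption, per the NVMe 2.0 / TP4146 definition), a check looks like:

    ctratt=0x8000
    if (( ctratt & (1 << 19) )); then    # bit position assumed per TP4146
        echo "controller advertises FDP"
    else
        printf 'no FDP in ctratt=0x%x\n' "$ctratt"
    fi
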
00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.982 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:17.983 15:04:18 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:17.983 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:17.984 15:04:18 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.984 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:17.985 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # 
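The cycle condensed above is nvme_get() from nvme/functions.sh driving the nvme-cli binary. Pieced together from the functions.sh@16-23 fragments visible in this trace, it plausibly looks like the sketch below; only the traced lines (@16, @17, @18, @20, @21, @22, @23) are evidenced here, and the whitespace trimming and process-substitution plumbing are assumptions, not the verbatim SPDK source.

    # Reconstruction of nvme_get() from the xtrace fragments; a sketch, not the real source.
    nvme_get() {
        local ref=$1 reg val                       # functions.sh@17
        shift                                      # functions.sh@18
        local -gA "$ref=()"                        # functions.sh@20: global assoc array, e.g. ng2n1=()
        while IFS=: read -r reg val; do            # functions.sh@21: split 'reg : val' output lines
            [[ -n $val ]] || continue              # functions.sh@22: skip banner lines with no value
            reg=${reg//[[:space:]]/}               # assumed trim, yielding keys like 'vid', 'nsze'
            eval "${ref}[$reg]=\"${val# }\""       # functions.sh@23: e.g. nvme2[vid]="0x1b36"
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # functions.sh@16: e.g. nvme id-ns /dev/ng2n1
    }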
00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[] populated from id-ns /dev/ng2n1:
00:11:17.986 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- #   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:11:17.987 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- #   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:11:17.987 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- #   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:17.988 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- #   lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:11:17.988 15:04:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:11:17.988 15:04:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:17.988 15:04:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:11:17.988 15:04:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:11:17.988 15:04:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:11:17.988 15:04:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:11:18.254 15:04:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
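The walk just advanced from ng2n1 to ng2n2 through the extglob pattern traced at functions.sh@54-58. A minimal standalone sketch of that enumeration; the for-loop header is copied verbatim from the trace, while the shopt/ctrl scaffolding and the stub parser are assumptions added so the pattern can be exercised on its own:

    #!/usr/bin/env bash
    shopt -s extglob nullglob                      # assumed: @(...) needs extglob
    declare -A _ctrl_ns=()
    ctrl=/sys/class/nvme/nvme2
    nvme_get() { echo "would parse: $*"; }         # stand-in for the parser sketched earlier

    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        # For ctrl=nvme2 this expands to ng2* and nvme2n*, so both the generic
        # char devices (ng2n1..ng2n3) and any block devices (nvme2n*) match.
        [[ -e $ns ]] || continue                   # functions.sh@55
        ns_dev=${ns##*/}                           # functions.sh@56: ng2n1, ng2n2, ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"    # functions.sh@57
        _ctrl_ns[${ns##*n}]=$ns_dev                # functions.sh@58: keyed by nsid, e.g. _ctrl_ns[1]=ng2n1
    done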
00:11:18.254 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[] populated from id-ns /dev/ng2n2:
00:11:18.254 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- #   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:11:18.254 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- #   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:11:18.254 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- #   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- #   lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:11:18.255 
15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:11:18.255 15:04:18 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.255 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:18.256 15:04:18 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:18.256 15:04:18 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:18.256 
15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:18.256 15:04:18 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:18.256 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:18.257 
15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:18.257 15:04:18 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:18.257 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:18.258 15:04:18 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:18.258 15:04:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:18.259 15:04:18 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- 
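The nsfeat=0x14 captured just above decodes further; the bit meanings below come from the NVMe base specification, not from this log:

  nsfeat=0x14
  (( nsfeat & 1 << 2 )) && echo "DAE: deallocated/unwritten block error reporting"
  (( nsfeat & 1 << 4 )) && echo "OPTPERF: NPWG/NPWA/NPDG/NPDA/NOWS fields are defined"

(QEMU still reports those optional-performance fields as 0 further down.)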
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:18.259 15:04:18 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.259 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:18.259 15:04:18 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:18.260 15:04:18 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:18.260 15:04:18 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:18.260 15:04:18 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:18.260 15:04:18 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- 
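With nvme2n3 registered, its geometry follows from the captured fields: flbas bits 3:0 pick the active lbafN descriptor, and lbads within it is log2 of the data block size. Using the values recorded above:

  flbas=0x4
  fmt=$(( flbas & 0xf ))                       # -> 4, matching 'lbaf4 ... (in use)'
  lbaf4='ms:0 lbads:12 rp:0 (in use)'
  lbads=${lbaf4#*lbads:}; lbads=${lbads%% *}   # -> 12
  echo "LBA format $fmt: $(( 1 << lbads ))-byte blocks, no metadata (ms:0)"   # 4096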
nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
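The pci=0000:00:13.0 binding recorded for nvme3 at functions.sh@49 above can be recovered independently from sysfs; this is an assumed equivalent of the traced lookup, not the script's own pci_can_use path:

  pci=$(basename "$(readlink -f /sys/class/nvme/nvme3/device)")
  echo "nvme3 -> $pci"                         # nvme3 -> 0000:00:13.0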
00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.260 15:04:18 nvme_fdp -- 
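The ctratt=0x88010 just captured is the value that later singles this controller out. Decoded (bit positions per the NVMe base spec and the FDP technical proposal; the bit-19 test itself is confirmed by the trace further down):

  ctratt=0x88010
  (( ctratt & 1 << 4  )) && echo "Endurance Groups supported"          # 0x10
  (( ctratt & 1 << 19 )) && echo "Flexible Data Placement supported"   # 0x80000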
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 
15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:18.260 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # 
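The wctemp/cctemp thresholds captured above are reported in Kelvin, so:

  echo "warning: $(( 343 - 273 )) C, critical: $(( 373 - 273 )) C"   # 70 C / 100 C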
eval 'nvme3[hmmin]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
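The sqes=0x66 and cqes=0x44 values above pack the minimum (low nibble) and maximum (high nibble) queue entry sizes as powers of two, per the NVMe base spec:

  sqes=0x66 cqes=0x44
  echo "SQ entry: $(( 1 << (sqes & 0xf) )) bytes"   # 64
  echo "CQ entry: $(( 1 << (cqes & 0xf) )) bytes"   # 16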
00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.261 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:18.262 15:04:18 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:18.262 15:04:18 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@207 -- 
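The selection pass traced here boils down to one bit test per controller. A condensed sketch reconstructed from functions.sh@176-@199 as traced; the inline nvmeN declarations stand in for the arrays nvme_get populated earlier, and this is not the verbatim upstream code:

  declare -A nvme0=([ctratt]=0x8000) nvme1=([ctratt]=0x8000)
  declare -A nvme2=([ctratt]=0x8000) nvme3=([ctratt]=0x88010)
  ctrl_has_fdp() {
      local -n _ctrl=$1                        # nameref, as at functions.sh@73
      local ctratt=${_ctrl[ctratt]}
      (( ctratt & 1 << 19 ))                   # FDP bit, as at functions.sh@180
  }
  for ctrl in nvme0 nvme1 nvme2 nvme3; do
      ctrl_has_fdp "$ctrl" && echo "$ctrl"     # prints only nvme3
  done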
# (( 1 > 0 ))
00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3
00:11:18.262 15:04:19 nvme_fdp -- nvme/functions.sh@209 -- # return 0
00:11:18.262 15:04:19 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:11:18.262 15:04:19 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
00:11:18.262 15:04:19 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:11:19.195 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:19.761 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:11:19.761 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:11:19.761 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:11:19.761 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:11:20.019 15:04:20 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:11:20.019 15:04:20 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:20.019 15:04:20 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:20.019 15:04:20 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:11:20.019 ************************************
00:11:20.019 START TEST nvme_flexible_data_placement
00:11:20.019 ************************************
00:11:20.019 15:04:20 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:11:20.278 Initializing NVMe Controllers
00:11:20.278 Attaching to 0000:00:13.0
00:11:20.278 Controller supports FDP Attached to 0000:00:13.0
00:11:20.278 Namespace ID: 1 Endurance Group ID: 1
00:11:20.278 Initialization complete.
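The 'Get Feature: FDP' result the test prints next can also be read with stock nvme-cli before setup.sh detaches the kernel driver; a hedged equivalent (feature id 0x1d per the FDP technical proposal, endurance group 1 in cdw11 as reported in the attach banner above):

  nvme get-feature /dev/nvme3 --feature-id=0x1d --cdw11=1 --human-readable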
00:11:20.278 00:11:20.278 ================================== 00:11:20.278 == FDP tests for Namespace: #01 == 00:11:20.278 ================================== 00:11:20.278 00:11:20.278 Get Feature: FDP: 00:11:20.278 ================= 00:11:20.278 Enabled: Yes 00:11:20.278 FDP configuration Index: 0 00:11:20.278 00:11:20.278 FDP configurations log page 00:11:20.278 =========================== 00:11:20.278 Number of FDP configurations: 1 00:11:20.278 Version: 0 00:11:20.278 Size: 112 00:11:20.278 FDP Configuration Descriptor: 0 00:11:20.278 Descriptor Size: 96 00:11:20.278 Reclaim Group Identifier format: 2 00:11:20.278 FDP Volatile Write Cache: Not Present 00:11:20.278 FDP Configuration: Valid 00:11:20.278 Vendor Specific Size: 0 00:11:20.278 Number of Reclaim Groups: 2 00:11:20.278 Number of Reclaim Unit Handles: 8 00:11:20.278 Max Placement Identifiers: 128 00:11:20.278 Number of Namespaces Supported: 256 00:11:20.278 Reclaim Unit Nominal Size: 6000000 bytes 00:11:20.278 Estimated Reclaim Unit Time Limit: Not Reported 00:11:20.278 RUH Desc #000: RUH Type: Initially Isolated 00:11:20.278 RUH Desc #001: RUH Type: Initially Isolated 00:11:20.278 RUH Desc #002: RUH Type: Initially Isolated 00:11:20.278 RUH Desc #003: RUH Type: Initially Isolated 00:11:20.278 RUH Desc #004: RUH Type: Initially Isolated 00:11:20.278 RUH Desc #005: RUH Type: Initially Isolated 00:11:20.278 RUH Desc #006: RUH Type: Initially Isolated 00:11:20.278 RUH Desc #007: RUH Type: Initially Isolated 00:11:20.278 00:11:20.279 FDP reclaim unit handle usage log page 00:11:20.279 ====================================== 00:11:20.279 Number of Reclaim Unit Handles: 8 00:11:20.279 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:20.279 RUH Usage Desc #001: RUH Attributes: Unused 00:11:20.279 RUH Usage Desc #002: RUH Attributes: Unused 00:11:20.279 RUH Usage Desc #003: RUH Attributes: Unused 00:11:20.279 RUH Usage Desc #004: RUH Attributes: Unused 00:11:20.279 RUH Usage Desc #005: RUH Attributes: Unused 00:11:20.279 RUH Usage Desc #006: RUH Attributes: Unused 00:11:20.279 RUH Usage Desc #007: RUH Attributes: Unused 00:11:20.279 00:11:20.279 FDP statistics log page 00:11:20.279 ======================= 00:11:20.279 Host bytes with metadata written: 846446592 00:11:20.279 Media bytes with metadata written: 846536704 00:11:20.279 Media bytes erased: 0 00:11:20.279 00:11:20.279 FDP Reclaim unit handle status 00:11:20.279 ============================== 00:11:20.279 Number of RUHS descriptors: 2 00:11:20.279 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000038c4 00:11:20.279 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:20.279 00:11:20.279 FDP write on placement id: 0 success 00:11:20.279 00:11:20.279 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:20.279 00:11:20.279 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:20.279 00:11:20.279 Get Feature: FDP Events for Placement handle: #0 00:11:20.279 ======================== 00:11:20.279 Number of FDP Events: 6 00:11:20.279 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:20.279 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:20.279 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:11:20.279 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:20.279 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:20.279 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:20.279 00:11:20.279 FDP events log page
00:11:20.279 =================== 00:11:20.279 Number of FDP events: 1 00:11:20.279 FDP Event #0: 00:11:20.279 Event Type: RU Not Written to Capacity 00:11:20.279 Placement Identifier: Valid 00:11:20.279 NSID: Valid 00:11:20.279 Location: Valid 00:11:20.279 Placement Identifier: 0 00:11:20.279 Event Timestamp: 9 00:11:20.279 Namespace Identifier: 1 00:11:20.279 Reclaim Group Identifier: 0 00:11:20.279 Reclaim Unit Handle Identifier: 0 00:11:20.279 00:11:20.279 FDP test passed 00:11:20.279 00:11:20.279 real 0m0.337s 00:11:20.279 user 0m0.107s 00:11:20.279 sys 0m0.128s 00:11:20.279 15:04:20 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.279 15:04:20 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:20.279 ************************************ 00:11:20.279 END TEST nvme_flexible_data_placement 00:11:20.279 ************************************ 00:11:20.279 00:11:20.279 real 0m9.158s 00:11:20.279 user 0m1.695s 00:11:20.279 sys 0m2.572s 00:11:20.279 15:04:21 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.279 15:04:21 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:20.279 ************************************ 00:11:20.279 END TEST nvme_fdp 00:11:20.279 ************************************ 00:11:20.279 15:04:21 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:11:20.279 15:04:21 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:20.279 15:04:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:20.279 15:04:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.279 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:11:20.538 ************************************ 00:11:20.538 START TEST nvme_rpc 00:11:20.538 ************************************ 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:20.538 * Looking for test storage... 
00:11:20.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.538 15:04:21 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:20.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.538 --rc genhtml_branch_coverage=1 00:11:20.538 --rc genhtml_function_coverage=1 00:11:20.538 --rc genhtml_legend=1 00:11:20.538 --rc geninfo_all_blocks=1 00:11:20.538 --rc geninfo_unexecuted_blocks=1 00:11:20.538 00:11:20.538 ' 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:20.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.538 --rc genhtml_branch_coverage=1 00:11:20.538 --rc genhtml_function_coverage=1 00:11:20.538 --rc genhtml_legend=1 00:11:20.538 --rc geninfo_all_blocks=1 00:11:20.538 --rc geninfo_unexecuted_blocks=1 00:11:20.538 00:11:20.538 ' 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:11:20.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.538 --rc genhtml_branch_coverage=1 00:11:20.538 --rc genhtml_function_coverage=1 00:11:20.538 --rc genhtml_legend=1 00:11:20.538 --rc geninfo_all_blocks=1 00:11:20.538 --rc geninfo_unexecuted_blocks=1 00:11:20.538 00:11:20.538 ' 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:20.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.538 --rc genhtml_branch_coverage=1 00:11:20.538 --rc genhtml_function_coverage=1 00:11:20.538 --rc genhtml_legend=1 00:11:20.538 --rc geninfo_all_blocks=1 00:11:20.538 --rc geninfo_unexecuted_blocks=1 00:11:20.538 00:11:20.538 ' 00:11:20.538 15:04:21 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:20.538 15:04:21 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:20.538 15:04:21 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:20.797 15:04:21 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:20.797 15:04:21 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:20.797 15:04:21 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:11:20.797 15:04:21 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:20.797 15:04:21 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:20.797 15:04:21 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67265 00:11:20.797 15:04:21 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:20.797 15:04:21 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67265 00:11:20.797 15:04:21 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67265 ']' 00:11:20.797 15:04:21 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.797 15:04:21 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.797 15:04:21 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.797 15:04:21 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.797 15:04:21 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.797 [2024-11-20 15:04:21.580467] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:11:20.797 [2024-11-20 15:04:21.580627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67265 ] 00:11:21.056 [2024-11-20 15:04:21.768414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:21.314 [2024-11-20 15:04:21.915976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.314 [2024-11-20 15:04:21.916012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.250 15:04:22 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.250 15:04:22 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:22.250 15:04:22 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:22.508 Nvme0n1 00:11:22.508 15:04:23 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:22.508 15:04:23 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:22.766 request: 00:11:22.766 { 00:11:22.766 "bdev_name": "Nvme0n1", 00:11:22.766 "filename": "non_existing_file", 00:11:22.766 "method": "bdev_nvme_apply_firmware", 00:11:22.766 "req_id": 1 00:11:22.766 } 00:11:22.766 Got JSON-RPC error response 00:11:22.766 response: 00:11:22.766 { 00:11:22.766 "code": -32603, 00:11:22.766 "message": "open file failed." 00:11:22.766 } 00:11:22.766 15:04:23 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:22.766 15:04:23 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:22.766 15:04:23 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:23.024 15:04:23 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:23.024 15:04:23 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67265 00:11:23.024 15:04:23 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67265 ']' 00:11:23.024 15:04:23 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67265 00:11:23.024 15:04:23 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:23.024 15:04:23 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.024 15:04:23 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67265 00:11:23.024 15:04:23 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.024 15:04:23 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.024 killing process with pid 67265 00:11:23.024 15:04:23 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67265' 00:11:23.024 15:04:23 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67265 00:11:23.024 15:04:23 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67265 00:11:26.308 00:11:26.308 real 0m5.300s 00:11:26.308 user 0m9.642s 00:11:26.308 sys 0m0.989s 00:11:26.308 15:04:26 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.308 15:04:26 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.308 ************************************ 00:11:26.308 END TEST nvme_rpc 00:11:26.308 ************************************ 00:11:26.308 15:04:26 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:26.308 15:04:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:11:26.308 15:04:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.308 15:04:26 -- common/autotest_common.sh@10 -- # set +x 00:11:26.308 ************************************ 00:11:26.308 START TEST nvme_rpc_timeouts 00:11:26.308 ************************************ 00:11:26.308 15:04:26 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:26.308 * Looking for test storage... 00:11:26.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:26.308 15:04:26 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:26.308 15:04:26 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:26.308 15:04:26 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:11:26.308 15:04:26 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.308 15:04:26 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:11:26.308 15:04:26 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.308 15:04:26 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:26.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.308 --rc genhtml_branch_coverage=1 00:11:26.308 --rc genhtml_function_coverage=1 00:11:26.308 --rc genhtml_legend=1 00:11:26.308 --rc geninfo_all_blocks=1 00:11:26.308 --rc geninfo_unexecuted_blocks=1 00:11:26.308 00:11:26.308 ' 00:11:26.308 15:04:26 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:26.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.308 --rc genhtml_branch_coverage=1 00:11:26.308 --rc genhtml_function_coverage=1 00:11:26.308 --rc genhtml_legend=1 00:11:26.308 --rc geninfo_all_blocks=1 00:11:26.308 --rc geninfo_unexecuted_blocks=1 00:11:26.308 00:11:26.308 ' 00:11:26.308 15:04:26 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:26.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.308 --rc genhtml_branch_coverage=1 00:11:26.308 --rc genhtml_function_coverage=1 00:11:26.308 --rc genhtml_legend=1 00:11:26.308 --rc geninfo_all_blocks=1 00:11:26.308 --rc geninfo_unexecuted_blocks=1 00:11:26.308 00:11:26.308 ' 00:11:26.308 15:04:26 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:26.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.308 --rc genhtml_branch_coverage=1 00:11:26.309 --rc genhtml_function_coverage=1 00:11:26.309 --rc genhtml_legend=1 00:11:26.309 --rc geninfo_all_blocks=1 00:11:26.309 --rc geninfo_unexecuted_blocks=1 00:11:26.309 00:11:26.309 ' 00:11:26.309 15:04:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:26.309 15:04:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67352 00:11:26.309 15:04:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67352 00:11:26.309 15:04:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67384 00:11:26.309 15:04:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:26.309 15:04:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:26.309 15:04:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67384 00:11:26.309 15:04:26 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67384 ']' 00:11:26.309 15:04:26 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.309 15:04:26 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.309 15:04:26 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.309 15:04:26 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.309 15:04:26 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:26.309 [2024-11-20 15:04:26.840738] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:11:26.309 [2024-11-20 15:04:26.840894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67384 ] 00:11:26.309 [2024-11-20 15:04:27.032458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:26.567 [2024-11-20 15:04:27.184838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.567 [2024-11-20 15:04:27.184878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.503 15:04:28 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.503 Checking default timeout settings: 00:11:27.503 15:04:28 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:11:27.503 15:04:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:27.503 15:04:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:27.762 Making settings changes with rpc: 00:11:27.762 15:04:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:27.762 15:04:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:28.022 Check default vs. modified settings: 00:11:28.022 15:04:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:11:28.022 15:04:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67352 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67352 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:28.591 Setting action_on_timeout is changed as expected. 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67352 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67352 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:28.591 Setting timeout_us is changed as expected. 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67352 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67352 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:28.591 Setting timeout_admin_us is changed as expected. 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67352 /tmp/settings_modified_67352 00:11:28.591 15:04:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67384 00:11:28.591 15:04:29 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67384 ']' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67384 00:11:28.591 15:04:29 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:11:28.591 15:04:29 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67384 00:11:28.591 killing process with pid 67384 00:11:28.591 15:04:29 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.591 15:04:29 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67384' 00:11:28.591 15:04:29 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67384 00:11:28.591 15:04:29 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67384 00:11:31.123 RPC TIMEOUT SETTING TEST PASSED. 00:11:31.123 15:04:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
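The pass/fail decision traced above is a plain text comparison over two save_config dumps: each knob is extracted from the default and the modified JSON with grep/awk/sed, and the two values must differ. A hedged reconstruction of the loop; the file names match the /tmp/settings_*_67352 dumps above, while the failure branch is assumed since only the success path runs here:

  # Verify that bdev_nvme_set_options actually changed all three knobs.
  settings_to_check='action_on_timeout timeout_us timeout_admin_us'
  for setting in $settings_to_check; do
      setting_before=$(grep "$setting" /tmp/settings_default_67352 |
          awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      setting_modified=$(grep "$setting" /tmp/settings_modified_67352 |
          awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      if [ "$setting_before" == "$setting_modified" ]; then
          echo "Setting $setting was not changed" >&2   # assumed error path
          exit 1
      fi
      echo "Setting $setting is changed as expected."
  done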
00:11:31.123 00:11:31.123 real 0m5.263s 00:11:31.123 user 0m9.897s 00:11:31.123 sys 0m0.942s 00:11:31.123 15:04:31 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.123 15:04:31 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:31.123 ************************************ 00:11:31.123 END TEST nvme_rpc_timeouts 00:11:31.123 ************************************ 00:11:31.123 15:04:31 -- spdk/autotest.sh@239 -- # uname -s 00:11:31.123 15:04:31 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:11:31.123 15:04:31 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:31.123 15:04:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:31.123 15:04:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.123 15:04:31 -- common/autotest_common.sh@10 -- # set +x 00:11:31.123 ************************************ 00:11:31.123 START TEST sw_hotplug 00:11:31.123 ************************************ 00:11:31.123 15:04:31 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:31.123 * Looking for test storage... 00:11:31.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:31.123 15:04:31 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:31.123 15:04:31 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:11:31.123 15:04:31 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:31.382 15:04:32 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:31.382 15:04:32 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:11:31.382 15:04:32 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.382 15:04:32 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:31.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.382 --rc genhtml_branch_coverage=1 00:11:31.382 --rc genhtml_function_coverage=1 00:11:31.382 --rc genhtml_legend=1 00:11:31.382 --rc geninfo_all_blocks=1 00:11:31.382 --rc geninfo_unexecuted_blocks=1 00:11:31.382 00:11:31.382 ' 00:11:31.382 15:04:32 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:31.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.382 --rc genhtml_branch_coverage=1 00:11:31.382 --rc genhtml_function_coverage=1 00:11:31.382 --rc genhtml_legend=1 00:11:31.382 --rc geninfo_all_blocks=1 00:11:31.382 --rc geninfo_unexecuted_blocks=1 00:11:31.382 00:11:31.382 ' 00:11:31.382 15:04:32 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:31.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.382 --rc genhtml_branch_coverage=1 00:11:31.382 --rc genhtml_function_coverage=1 00:11:31.382 --rc genhtml_legend=1 00:11:31.382 --rc geninfo_all_blocks=1 00:11:31.382 --rc geninfo_unexecuted_blocks=1 00:11:31.382 00:11:31.382 ' 00:11:31.382 15:04:32 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:31.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.382 --rc genhtml_branch_coverage=1 00:11:31.382 --rc genhtml_function_coverage=1 00:11:31.382 --rc genhtml_legend=1 00:11:31.382 --rc geninfo_all_blocks=1 00:11:31.382 --rc geninfo_unexecuted_blocks=1 00:11:31.382 00:11:31.382 ' 00:11:31.382 15:04:32 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:31.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:31.948 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:31.948 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:31.948 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:31.948 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:32.216 15:04:32 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:32.216 15:04:32 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:32.216 15:04:32 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:11:32.216 15:04:32 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@233 -- # local class 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:32.216 15:04:32 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:32.216 15:04:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:32.217 15:04:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:32.217 15:04:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:32.217 15:04:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:32.217 15:04:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:32.217 15:04:32 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:11:32.217 15:04:32 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:32.217 15:04:32 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:32.217 15:04:32 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:32.217 15:04:32 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:32.782 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:33.041 Waiting for block devices as requested 00:11:33.041 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:33.299 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:33.299 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:33.299 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:38.579 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:38.579 15:04:39 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:38.580 15:04:39 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:39.148 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:39.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:39.148 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:39.718 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:39.977 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:39.977 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:39.977 15:04:40 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:39.977 15:04:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.977 15:04:40 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:39.977 15:04:40 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:39.977 15:04:40 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68278 00:11:39.977 15:04:40 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:40.236 15:04:40 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:40.236 15:04:40 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:40.236 15:04:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:40.236 15:04:40 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:40.236 15:04:40 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:40.236 15:04:40 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:40.236 15:04:40 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:40.236 15:04:40 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:11:40.236 15:04:40 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:40.236 15:04:40 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:40.236 15:04:40 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:40.236 15:04:40 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:40.236 15:04:40 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:40.495 Initializing NVMe Controllers 00:11:40.495 Attaching to 0000:00:10.0 00:11:40.495 Attaching to 0000:00:11.0 00:11:40.495 Attached to 0000:00:10.0 00:11:40.495 Attached to 0000:00:11.0 00:11:40.495 Initialization complete. Starting I/O... 
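Before the I/O counters start streaming, note how the two hotplug targets were found: the nvme_in_userspace walk traced at 15:04:32 is a class-code filter over lspci, since NVMe controllers are PCI class 01 (mass storage), subclass 08 (NVM), prog-if 02. Stripped of the pci_can_use allow/deny checks and the FreeBSD branch, the pipeline from the trace is:

  # Enumerate NVMe BDFs by PCI class code 0x010802, as in scripts/common.sh.
  nvme_bdfs() {
      lspci -mm -n -D |               # one record per function, quoted fields
          grep -i -- -p02 |           # prog-if 02 = NVM Express
          awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' |  # class+subclass
          tr -d '"'                   # -> 0000:00:10.0 ... 0000:00:13.0
  }

(The odd-looking -v 'cc="0108"' keeps the literal quote characters, so cc lines up with lspci's quoted class column.)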
00:11:40.495 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:40.495 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:40.495 00:11:41.433 QEMU NVMe Ctrl (12340 ): 1560 I/Os completed (+1560) 00:11:41.433 QEMU NVMe Ctrl (12341 ): 1560 I/Os completed (+1560) 00:11:41.433 00:11:42.370 QEMU NVMe Ctrl (12340 ): 3672 I/Os completed (+2112) 00:11:42.370 QEMU NVMe Ctrl (12341 ): 3672 I/Os completed (+2112) 00:11:42.370 00:11:43.306 QEMU NVMe Ctrl (12340 ): 5673 I/Os completed (+2001) 00:11:43.306 QEMU NVMe Ctrl (12341 ): 5675 I/Os completed (+2003) 00:11:43.306 00:11:44.281 QEMU NVMe Ctrl (12340 ): 7548 I/Os completed (+1875) 00:11:44.281 QEMU NVMe Ctrl (12341 ): 7548 I/Os completed (+1873) 00:11:44.281 00:11:45.668 QEMU NVMe Ctrl (12340 ): 9496 I/Os completed (+1948) 00:11:45.668 QEMU NVMe Ctrl (12341 ): 9496 I/Os completed (+1948) 00:11:45.668 00:11:46.236 15:04:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:46.236 15:04:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:46.236 15:04:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:46.236 [2024-11-20 15:04:46.834621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:46.236 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:46.236 [2024-11-20 15:04:46.836594] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.236 [2024-11-20 15:04:46.836776] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.236 [2024-11-20 15:04:46.836809] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.236 [2024-11-20 15:04:46.836836] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.236 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:46.236 [2024-11-20 15:04:46.840169] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.236 [2024-11-20 15:04:46.840310] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.236 [2024-11-20 15:04:46.840362] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.236 [2024-11-20 15:04:46.840464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.236 15:04:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:46.236 15:04:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:46.236 [2024-11-20 15:04:46.877620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:46.236 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:46.236 [2024-11-20 15:04:46.879488] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.236 [2024-11-20 15:04:46.879643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.237 [2024-11-20 15:04:46.879707] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.237 [2024-11-20 15:04:46.879818] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.237 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:46.237 [2024-11-20 15:04:46.882879] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.237 [2024-11-20 15:04:46.882972] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.237 [2024-11-20 15:04:46.883016] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.237 [2024-11-20 15:04:46.883051] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.237 15:04:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:46.237 15:04:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:46.237 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:46.237 EAL: Scan for (pci) bus failed. 00:11:46.237 15:04:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:46.237 15:04:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:46.237 15:04:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:46.495 00:11:46.495 15:04:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:46.495 15:04:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:46.495 15:04:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:46.495 15:04:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:46.495 15:04:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:46.495 Attaching to 0000:00:10.0 00:11:46.495 Attached to 0000:00:10.0 00:11:46.495 15:04:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:46.496 15:04:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:46.496 15:04:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:46.496 Attaching to 0000:00:11.0 00:11:46.496 Attached to 0000:00:11.0 00:11:47.432 QEMU NVMe Ctrl (12340 ): 1677 I/Os completed (+1677) 00:11:47.432 QEMU NVMe Ctrl (12341 ): 1483 I/Os completed (+1483) 00:11:47.432 00:11:48.369 QEMU NVMe Ctrl (12340 ): 3372 I/Os completed (+1695) 00:11:48.369 QEMU NVMe Ctrl (12341 ): 3273 I/Os completed (+1790) 00:11:48.369 00:11:49.303 QEMU NVMe Ctrl (12340 ): 5149 I/Os completed (+1777) 00:11:49.303 QEMU NVMe Ctrl (12341 ): 5084 I/Os completed (+1811) 00:11:49.303 00:11:50.289 QEMU NVMe Ctrl (12340 ): 6977 I/Os completed (+1828) 00:11:50.289 QEMU NVMe Ctrl (12341 ): 6920 I/Os completed (+1836) 00:11:50.289 00:11:51.666 QEMU NVMe Ctrl (12340 ): 8861 I/Os completed (+1884) 00:11:51.666 QEMU NVMe Ctrl (12341 ): 8804 I/Os completed (+1884) 00:11:51.666 00:11:52.601 QEMU NVMe Ctrl (12340 ): 10701 I/Os completed (+1840) 00:11:52.601 QEMU NVMe Ctrl (12341 ): 10644 I/Os completed (+1840) 00:11:52.601 00:11:53.538 QEMU NVMe Ctrl (12340 ): 12597 I/Os completed (+1896) 00:11:53.538 QEMU NVMe Ctrl (12341 ): 12544 I/Os completed (+1900) 
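Each hotplug event in this run is driven purely from sysfs: the device is dropped out from under the driver (the failed-state and abort messages above), the helper waits, and the function is rescanned and re-bound. A minimal sketch of one cycle; the paths follow the standard Linux PCI sysfs ABI, and the exact write sequence inside sw_hotplug.sh is inferred from the echo traces above, not quoted from it:

  bdf=0000:00:10.0
  echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # surprise-remove the function
  sleep 6                                       # hotplug_wait: let aborts drain
  echo 1 > /sys/bus/pci/rescan                  # rediscover the device
  echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
  echo "$bdf" > /sys/bus/pci/drivers_probe      # re-bind the userspace driver
  echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override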
00:11:53.538 00:11:54.474 QEMU NVMe Ctrl (12340 ): 14437 I/Os completed (+1840) 00:11:54.474 QEMU NVMe Ctrl (12341 ): 14416 I/Os completed (+1872) 00:11:54.474 00:11:55.412 QEMU NVMe Ctrl (12340 ): 16365 I/Os completed (+1928) 00:11:55.412 QEMU NVMe Ctrl (12341 ): 16344 I/Os completed (+1928) 00:11:55.412 00:11:56.387 QEMU NVMe Ctrl (12340 ): 18261 I/Os completed (+1896) 00:11:56.387 QEMU NVMe Ctrl (12341 ): 18240 I/Os completed (+1896) 00:11:56.387 00:11:57.324 QEMU NVMe Ctrl (12340 ): 20169 I/Os completed (+1908) 00:11:57.324 QEMU NVMe Ctrl (12341 ): 20161 I/Os completed (+1921) 00:11:57.324 00:11:58.259 QEMU NVMe Ctrl (12340 ): 21965 I/Os completed (+1796) 00:11:58.259 QEMU NVMe Ctrl (12341 ): 21974 I/Os completed (+1813) 00:11:58.259 00:11:58.523 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:58.523 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:58.523 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:58.523 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:58.523 [2024-11-20 15:04:59.225451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:58.523 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:58.523 [2024-11-20 15:04:59.227642] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.523 [2024-11-20 15:04:59.227836] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.523 [2024-11-20 15:04:59.227970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.523 [2024-11-20 15:04:59.228032] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.523 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:58.523 [2024-11-20 15:04:59.231391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.523 [2024-11-20 15:04:59.231561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.523 [2024-11-20 15:04:59.231656] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.523 [2024-11-20 15:04:59.231705] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.523 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:58.523 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:58.523 [2024-11-20 15:04:59.270344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:58.523 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:58.523 [2024-11-20 15:04:59.272479] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.523 [2024-11-20 15:04:59.272641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.523 [2024-11-20 15:04:59.272793] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.523 [2024-11-20 15:04:59.272887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.523 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:58.523 [2024-11-20 15:04:59.276423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.523 [2024-11-20 15:04:59.276497] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.523 [2024-11-20 15:04:59.276529] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.523 [2024-11-20 15:04:59.276558] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.523 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:58.523 EAL: Scan for (pci) bus failed. 00:11:58.523 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:58.523 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:58.781 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:58.781 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:58.781 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:58.781 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:58.781 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:58.781 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:58.781 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:58.781 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:58.781 Attaching to 0000:00:10.0 00:11:58.781 Attached to 0000:00:10.0 00:11:58.781 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:58.781 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:58.781 15:04:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:58.781 Attaching to 0000:00:11.0 00:11:58.781 Attached to 0000:00:11.0 00:11:59.349 QEMU NVMe Ctrl (12340 ): 1074 I/Os completed (+1074) 00:11:59.349 QEMU NVMe Ctrl (12341 ): 891 I/Os completed (+891) 00:11:59.349 00:12:00.287 QEMU NVMe Ctrl (12340 ): 2898 I/Os completed (+1824) 00:12:00.287 QEMU NVMe Ctrl (12341 ): 2716 I/Os completed (+1825) 00:12:00.287 00:12:01.665 QEMU NVMe Ctrl (12340 ): 4846 I/Os completed (+1948) 00:12:01.665 QEMU NVMe Ctrl (12341 ): 4668 I/Os completed (+1952) 00:12:01.665 00:12:02.231 QEMU NVMe Ctrl (12340 ): 6850 I/Os completed (+2004) 00:12:02.231 QEMU NVMe Ctrl (12341 ): 6672 I/Os completed (+2004) 00:12:02.231 00:12:03.268 QEMU NVMe Ctrl (12340 ): 8842 I/Os completed (+1992) 00:12:03.268 QEMU NVMe Ctrl (12341 ): 8717 I/Os completed (+2045) 00:12:03.268 00:12:04.649 QEMU NVMe Ctrl (12340 ): 10850 I/Os completed (+2008) 00:12:04.649 QEMU NVMe Ctrl (12341 ): 10725 I/Os completed (+2008) 00:12:04.649 00:12:05.588 QEMU NVMe Ctrl (12340 ): 12830 I/Os completed (+1980) 00:12:05.588 QEMU NVMe Ctrl (12341 ): 12705 I/Os completed (+1980) 00:12:05.588 
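The per-run wall time reported at the very end (43.16 s here) comes from the timing wrapper set up at 15:04:40, where the trace shows local time=0 TIMEFORMAT=%2R. A hedged sketch of that idiom using the usual file-descriptor dance; the real wrapper's exact redirections may differ:

  # Save the real stdout/stderr once, outside any command substitution.
  exec 3>&1 4>&2

  # Capture bash's `time` report while the helper's own output still reaches
  # the log. TIMEFORMAT=%2R prints bare seconds, e.g. "43.16".
  timing_cmd() {
      local time=0 TIMEFORMAT=%2R
      # The group's stderr (where `time` writes its report) is captured;
      # the command's own output goes to the saved fds 3/4.
      time=$({ time "$@" 1>&3 2>&4; } 2>&1)
      echo "$time"
  }

  helper_time=$(timing_cmd remove_attach_helper 3 6 false)
  printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
      "$helper_time" 2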
00:12:06.524 QEMU NVMe Ctrl (12340 ): 14806 I/Os completed (+1976) 00:12:06.524 QEMU NVMe Ctrl (12341 ): 14684 I/Os completed (+1979) 00:12:06.524 00:12:07.461 QEMU NVMe Ctrl (12340 ): 16794 I/Os completed (+1988) 00:12:07.461 QEMU NVMe Ctrl (12341 ): 16672 I/Os completed (+1988) 00:12:07.461 00:12:08.399 QEMU NVMe Ctrl (12340 ): 18794 I/Os completed (+2000) 00:12:08.399 QEMU NVMe Ctrl (12341 ): 18672 I/Os completed (+2000) 00:12:08.399 00:12:09.336 QEMU NVMe Ctrl (12340 ): 20754 I/Os completed (+1960) 00:12:09.336 QEMU NVMe Ctrl (12341 ): 20632 I/Os completed (+1960) 00:12:09.336 00:12:10.274 QEMU NVMe Ctrl (12340 ): 22774 I/Os completed (+2020) 00:12:10.274 QEMU NVMe Ctrl (12341 ): 22652 I/Os completed (+2020) 00:12:10.274 00:12:10.843 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:10.843 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:10.843 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:10.843 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:10.843 [2024-11-20 15:05:11.619728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:10.843 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:10.843 [2024-11-20 15:05:11.622774] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.843 [2024-11-20 15:05:11.622844] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.843 [2024-11-20 15:05:11.622867] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.843 [2024-11-20 15:05:11.622891] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.843 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:10.843 [2024-11-20 15:05:11.629086] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.843 [2024-11-20 15:05:11.629150] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.843 [2024-11-20 15:05:11.629171] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.843 [2024-11-20 15:05:11.629193] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.843 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:12:10.843 EAL: Scan for (pci) bus failed. 00:12:10.843 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:10.843 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:10.843 [2024-11-20 15:05:11.660929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:10.843 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:10.843 [2024-11-20 15:05:11.662596] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.843 [2024-11-20 15:05:11.662651] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.843 [2024-11-20 15:05:11.662676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.843 [2024-11-20 15:05:11.662698] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.843 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:10.843 [2024-11-20 15:05:11.665406] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.843 [2024-11-20 15:05:11.665454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.843 [2024-11-20 15:05:11.665481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.843 [2024-11-20 15:05:11.665500] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.102 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:11.102 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:11.102 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:11.102 EAL: Scan for (pci) bus failed. 00:12:11.102 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.102 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.102 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:11.102 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:11.102 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.102 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.102 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.102 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:11.102 Attaching to 0000:00:10.0 00:12:11.102 Attached to 0000:00:10.0 00:12:11.362 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:11.362 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.362 15:05:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:11.362 Attaching to 0000:00:11.0 00:12:11.362 Attached to 0000:00:11.0 00:12:11.362 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:11.362 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:11.362 [2024-11-20 15:05:11.996644] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:23.576 15:05:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:23.576 15:05:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:23.576 15:05:23 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.16 00:12:23.576 15:05:23 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.16 00:12:23.576 15:05:23 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:23.576 15:05:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.16 00:12:23.576 15:05:23 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.16 2 00:12:23.576 remove_attach_helper took 43.16s to complete (handling 2 nvme drive(s)) 15:05:23 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:30.186 15:05:29 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68278 00:12:30.186 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68278) - No such process 00:12:30.186 15:05:30 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68278 00:12:30.186 15:05:30 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:30.186 15:05:30 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:30.186 15:05:30 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:30.186 15:05:30 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68823 00:12:30.186 15:05:30 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:30.186 15:05:30 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:30.186 15:05:30 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68823 00:12:30.186 15:05:30 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68823 ']' 00:12:30.186 15:05:30 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.186 15:05:30 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.186 15:05:30 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.186 15:05:30 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.186 15:05:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:30.186 [2024-11-20 15:05:30.130998] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
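tgt_run_hotplug (sw_hotplug.sh:107-113, visible in the trace above) starts spdk_tgt, waits for its RPC socket, and installs a trap whose cleanup both kills the target and writes 1 to /sys/bus/pci/rescan, so an aborted run still restores any surprise-removed devices for the next test. The shape of that setup, condensed from the trace (the backgrounding with & and $! is an assumption; the trap string, killprocess, and waitforlisten appear in the log as-is):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"   # blocks until /var/tmp/spdk.sock accepts RPCs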
00:12:30.186 [2024-11-20 15:05:30.131158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68823 ] 00:12:30.186 [2024-11-20 15:05:30.317390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.186 [2024-11-20 15:05:30.461865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.756 15:05:31 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.756 15:05:31 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:12:30.756 15:05:31 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:30.756 15:05:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.756 15:05:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:30.756 15:05:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.756 15:05:31 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:30.756 15:05:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:30.756 15:05:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:30.756 15:05:31 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:30.756 15:05:31 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:30.756 15:05:31 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:30.756 15:05:31 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:30.756 15:05:31 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:30.756 15:05:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:30.756 15:05:31 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:30.756 15:05:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:30.756 15:05:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:30.756 15:05:31 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:37.344 15:05:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:37.344 15:05:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:37.344 15:05:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:37.344 15:05:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:37.344 15:05:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:37.344 15:05:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:37.344 15:05:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:37.344 15:05:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:37.344 [2024-11-20 15:05:37.579886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
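With hotplug notifications enabled via rpc_cmd bdev_nvme_set_hotplug -e, remove_attach_helper drives the event loop whose line numbers (38-43, 50-51, 56-66) repeat through the rest of this log: remove every controller, wait for the bdevs to disappear, then rescan and rebind. xtrace prints only the values passed to echo, not their redirection targets, so the sysfs paths below are assumptions; a minimal sketch:

    nvmes=(0000:00:10.0 0000:00:11.0)
    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3
        local dev bdfs
        sleep "$hotplug_wait"                                 # line 36: settle before the first event
        while ((hotplug_events--)); do                        # line 38
            for dev in "${nvmes[@]}"; do                      # line 39
                echo 1 > "/sys/bus/pci/devices/$dev/remove"   # line 40 (path assumed)
            done
            if $use_bdev; then
                # lines 50-51: poll until no NVMe-backed bdev reports a BDF
                while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
                    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
                    sleep 0.5
                done
            fi
            echo 1 > /sys/bus/pci/rescan                      # line 56 (path assumed)
            for dev in "${nvmes[@]}"; do                      # line 58
                echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # line 59 (assumed)
                echo "$dev" > /sys/bus/pci/drivers_probe      # lines 60-61 (assumed)
            done
            sleep 12                                          # line 66: let I/O run on the re-attached disks
        done
    }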
00:12:37.344 [2024-11-20 15:05:37.582898] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.344 [2024-11-20 15:05:37.583081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.344 [2024-11-20 15:05:37.583112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.344 [2024-11-20 15:05:37.583147] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.344 [2024-11-20 15:05:37.583160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.344 [2024-11-20 15:05:37.583175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.344 [2024-11-20 15:05:37.583191] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.344 [2024-11-20 15:05:37.583206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.344 [2024-11-20 15:05:37.583218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.344 [2024-11-20 15:05:37.583241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.344 [2024-11-20 15:05:37.583253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.344 [2024-11-20 15:05:37.583268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.344 15:05:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:37.344 15:05:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:37.344 15:05:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:37.344 15:05:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.344 15:05:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.344 15:05:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.344 15:05:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:37.344 15:05:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:37.344 [2024-11-20 15:05:37.979259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
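The bdev_bdfs helper itself can be read verbatim from the trace (sw_hotplug.sh:12-13): it lists the target's bdevs over RPC and extracts the unique PCI addresses; the /dev/fd/63 in the log is just how xtrace renders bash process substitution. Reassembled:

    # sw_hotplug.sh:12-13 as exercised above; rpc_cmd talks to the running
    # spdk_tgt over /var/tmp/spdk.sock.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }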
00:12:37.344 [2024-11-20 15:05:37.982273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.344 [2024-11-20 15:05:37.982322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.344 [2024-11-20 15:05:37.982345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.344 [2024-11-20 15:05:37.982375] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.344 [2024-11-20 15:05:37.982391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.344 [2024-11-20 15:05:37.982403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.344 [2024-11-20 15:05:37.982421] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.344 [2024-11-20 15:05:37.982433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.344 [2024-11-20 15:05:37.982448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.344 [2024-11-20 15:05:37.982461] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.344 [2024-11-20 15:05:37.982475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.344 [2024-11-20 15:05:37.982487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.344 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:37.344 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:37.344 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:37.344 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:37.344 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:37.344 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:37.344 15:05:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.344 15:05:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.344 15:05:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.604 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:37.604 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:37.604 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:37.604 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:37.604 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:37.604 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:37.604 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:37.604 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:37.604 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:37.604 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:37.863 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:37.863 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:37.863 15:05:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.066 15:05:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.066 15:05:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.066 15:05:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.066 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.066 15:05:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.066 15:05:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.066 [2024-11-20 15:05:50.658898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
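Line 71's odd-looking comparison (the right-hand side with every character backslash-escaped is just xtrace quoting a literal pattern) is the success check for a cycle: after rescan and rebind, bdev_bdfs must report exactly the original two BDFs again. In plain form:

    bdfs=($(bdev_bdfs))                      # line 70
    [[ "${bdfs[*]}" == "${nvmes[*]}" ]]      # line 71: both controllers re-attached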
00:12:50.066 [2024-11-20 15:05:50.661644] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.066 [2024-11-20 15:05:50.661698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.066 [2024-11-20 15:05:50.661733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.066 [2024-11-20 15:05:50.661766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.066 [2024-11-20 15:05:50.661778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.066 [2024-11-20 15:05:50.661794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.066 [2024-11-20 15:05:50.661808] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.067 [2024-11-20 15:05:50.661823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.067 [2024-11-20 15:05:50.661835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.067 [2024-11-20 15:05:50.661851] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.067 [2024-11-20 15:05:50.661862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.067 [2024-11-20 15:05:50.661877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.067 15:05:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.067 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:50.067 15:05:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:50.326 [2024-11-20 15:05:51.058258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:50.326 [2024-11-20 15:05:51.061228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.326 [2024-11-20 15:05:51.061401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.326 [2024-11-20 15:05:51.061440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.326 [2024-11-20 15:05:51.061471] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.326 [2024-11-20 15:05:51.061487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.326 [2024-11-20 15:05:51.061499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.326 [2024-11-20 15:05:51.061517] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.326 [2024-11-20 15:05:51.061529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.326 [2024-11-20 15:05:51.061545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.326 [2024-11-20 15:05:51.061559] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.326 [2024-11-20 15:05:51.061574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.326 [2024-11-20 15:05:51.061586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.585 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:50.586 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:50.586 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:50.586 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.586 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.586 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.586 15:05:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.586 15:05:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.586 15:05:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.586 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:50.586 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:50.586 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:50.586 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:50.586 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:50.845 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:50.845 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:50.845 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:50.845 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:50.845 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:50.845 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:50.845 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:50.845 15:05:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.061 15:06:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.061 15:06:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.061 15:06:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:03.061 [2024-11-20 15:06:03.638100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:03.061 [2024-11-20 15:06:03.641258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.061 [2024-11-20 15:06:03.641309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.061 [2024-11-20 15:06:03.641328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.061 [2024-11-20 15:06:03.641360] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.061 [2024-11-20 15:06:03.641373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.061 [2024-11-20 15:06:03.641393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.061 [2024-11-20 15:06:03.641407] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.061 [2024-11-20 15:06:03.641422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.061 [2024-11-20 15:06:03.641435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.061 [2024-11-20 15:06:03.641451] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.061 [2024-11-20 15:06:03.641462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.061 [2024-11-20 15:06:03.641477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:03.061 15:06:03 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.061 15:06:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.061 15:06:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.061 15:06:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:03.061 15:06:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:03.320 [2024-11-20 15:06:04.137327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:03.320 [2024-11-20 15:06:04.140342] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.320 [2024-11-20 15:06:04.140392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.320 [2024-11-20 15:06:04.140415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.320 [2024-11-20 15:06:04.140446] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.320 [2024-11-20 15:06:04.140461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.320 [2024-11-20 15:06:04.140474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.320 [2024-11-20 15:06:04.140491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.320 [2024-11-20 15:06:04.140503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.320 [2024-11-20 15:06:04.140523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.320 [2024-11-20 15:06:04.140537] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.320 [2024-11-20 15:06:04.140552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.320 [2024-11-20 15:06:04.140564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.581 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:03.581 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:03.581 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:03.581 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.581 15:06:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.581 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r 
'.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.581 15:06:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.581 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.581 15:06:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.581 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:03.581 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:03.581 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:03.581 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:03.581 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:03.840 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:03.840 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:03.840 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:03.840 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:03.840 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:03.840 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:03.840 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:03.840 15:06:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:16.071 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:16.071 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:16.071 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:16.071 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:16.071 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:16.071 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:16.071 15:06:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.071 15:06:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.071 15:06:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.071 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:16.071 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:16.071 15:06:16 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.19 00:13:16.071 15:06:16 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.19 00:13:16.071 15:06:16 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:16.071 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.19 00:13:16.071 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.19 2 00:13:16.071 remove_attach_helper took 45.19s to complete (handling 2 nvme drive(s)) 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:16.071 15:06:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.071 15:06:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.071 15:06:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.071 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:16.071 15:06:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.071 15:06:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.071 15:06:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.071 15:06:16 
sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:16.072 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:16.072 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:16.072 15:06:16 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:16.072 15:06:16 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:16.072 15:06:16 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:16.072 15:06:16 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:16.072 15:06:16 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:13:16.072 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:16.072 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:16.072 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:16.072 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:16.072 15:06:16 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:22.678 15:06:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:22.678 15:06:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:22.678 15:06:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:22.678 15:06:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:22.678 15:06:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:22.678 15:06:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:22.678 15:06:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:22.678 15:06:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:22.678 15:06:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:22.678 15:06:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:22.678 15:06:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:22.678 15:06:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.678 15:06:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:22.678 [2024-11-20 15:06:22.811055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
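The 43.16s and 45.19s figures above are produced by the timing wrapper whose internals flash by in the trace: timing_cmd sets TIMEFORMAT=%2R so bash's time keyword prints nothing but the elapsed real seconds, which the caller stores in helper_time for the printf summary. A reduced sketch of the mechanism (the real timing_cmd in common/autotest_common.sh also juggles file descriptors with exec; run_timed here is an illustrative name):

    run_timed() {
        local TIMEFORMAT=%2R   # "time" now emits just elapsed real seconds
        time "$@"              # e.g. prints "45.19" on stderr when $@ finishes
    }
    # Capture only stderr (where "time" reports) and keep its last line.
    helper_time=$( { run_timed remove_attach_helper 3 6 true; } 2>&1 >/dev/null | tail -n1 )
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2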
00:13:22.678 [2024-11-20 15:06:22.813083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.678 [2024-11-20 15:06:22.813128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.678 [2024-11-20 15:06:22.813148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.678 [2024-11-20 15:06:22.813179] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.678 [2024-11-20 15:06:22.813192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.678 [2024-11-20 15:06:22.813208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.678 [2024-11-20 15:06:22.813221] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.678 [2024-11-20 15:06:22.813239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.678 [2024-11-20 15:06:22.813251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.678 [2024-11-20 15:06:22.813277] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.678 [2024-11-20 15:06:22.813289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.678 [2024-11-20 15:06:22.813309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.678 15:06:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.678 15:06:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:22.678 15:06:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:22.678 [2024-11-20 15:06:23.210452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:22.678 [2024-11-20 15:06:23.212648] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.678 [2024-11-20 15:06:23.212696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.678 [2024-11-20 15:06:23.212864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.678 [2024-11-20 15:06:23.212912] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.678 [2024-11-20 15:06:23.212933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.678 [2024-11-20 15:06:23.212947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.678 [2024-11-20 15:06:23.212965] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.678 [2024-11-20 15:06:23.212977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.678 [2024-11-20 15:06:23.212992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.678 [2024-11-20 15:06:23.213007] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.678 [2024-11-20 15:06:23.213022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.678 [2024-11-20 15:06:23.213034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.678 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:22.678 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:22.679 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:22.679 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:22.679 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:22.679 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:22.679 15:06:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.679 15:06:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:22.679 15:06:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.679 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:22.679 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:22.937 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:22.937 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:22.937 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:22.937 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:22.937 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:22.937 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:22.937 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:22.937 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:22.937 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:22.937 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:22.937 15:06:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:35.149 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:35.149 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:35.149 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:35.149 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:35.149 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:35.149 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:35.149 15:06:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.149 15:06:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:35.150 15:06:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.150 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:35.150 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:35.150 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:35.150 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:35.150 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:35.150 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:35.150 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:35.150 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:35.150 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:35.150 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:35.150 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:35.150 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:35.150 15:06:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.150 15:06:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:35.150 [2024-11-20 15:06:35.890072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:35.150 [2024-11-20 15:06:35.892228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.150 [2024-11-20 15:06:35.892290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.150 [2024-11-20 15:06:35.892310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.150 [2024-11-20 15:06:35.892343] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.150 [2024-11-20 15:06:35.892355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.150 [2024-11-20 15:06:35.892371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.150 [2024-11-20 15:06:35.892385] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.150 [2024-11-20 15:06:35.892400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.150 [2024-11-20 15:06:35.892412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.150 [2024-11-20 15:06:35.892429] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.150 [2024-11-20 15:06:35.892440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.150 [2024-11-20 15:06:35.892457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.150 15:06:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.150 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:35.150 15:06:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:35.715 [2024-11-20 15:06:36.289443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:35.715 [2024-11-20 15:06:36.291662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.715 [2024-11-20 15:06:36.291710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.715 [2024-11-20 15:06:36.291889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.715 [2024-11-20 15:06:36.291925] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.715 [2024-11-20 15:06:36.291949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.715 [2024-11-20 15:06:36.291962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.715 [2024-11-20 15:06:36.291980] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.715 [2024-11-20 15:06:36.291992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.715 [2024-11-20 15:06:36.292006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.715 [2024-11-20 15:06:36.292022] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.715 [2024-11-20 15:06:36.292036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.715 [2024-11-20 15:06:36.292049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.715 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:35.715 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:35.715 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:35.715 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:35.715 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:35.715 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:35.715 15:06:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.715 15:06:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:35.715 15:06:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.715 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:35.715 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:35.973 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:35.973 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:35.973 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:35.973 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:35.973 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:35.973 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:35.973 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:35.973 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:35.973 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:35.973 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:35.973 15:06:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:48.172 15:06:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.172 15:06:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.172 15:06:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:48.172 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:48.172 15:06:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.172 15:06:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.172 [2024-11-20 15:06:48.969048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:48.172 [2024-11-20 15:06:48.971348] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.172 [2024-11-20 15:06:48.971397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.172 [2024-11-20 15:06:48.971416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.172 [2024-11-20 15:06:48.971450] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.172 [2024-11-20 15:06:48.971463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.173 [2024-11-20 15:06:48.971478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.173 [2024-11-20 15:06:48.971493] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.173 [2024-11-20 15:06:48.971512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.173 [2024-11-20 15:06:48.971525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.173 [2024-11-20 15:06:48.971541] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.173 [2024-11-20 15:06:48.971553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.173 [2024-11-20 15:06:48.971568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.173 15:06:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.173 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:48.173 15:06:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:48.739 [2024-11-20 15:06:49.468266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:48.739 [2024-11-20 15:06:49.470656] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.739 [2024-11-20 15:06:49.470839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.739 [2024-11-20 15:06:49.470958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.739 [2024-11-20 15:06:49.471027] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.739 [2024-11-20 15:06:49.471126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.739 [2024-11-20 15:06:49.471184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.739 [2024-11-20 15:06:49.471291] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.739 [2024-11-20 15:06:49.471331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.739 [2024-11-20 15:06:49.471368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.739 [2024-11-20 15:06:49.471383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.739 [2024-11-20 15:06:49.471404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.739 [2024-11-20 15:06:49.471416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.739 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:48.739 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:48.739 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:48.739 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:48.739 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:48.739 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:48.739 15:06:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.739 15:06:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.739 15:06:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.739 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:48.739 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:49.015 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:49.015 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:49.015 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:49.015 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:49.015 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:49.015 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:49.015 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:49.015 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:49.291 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:49.291 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:49.291 15:06:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:01.499 15:07:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:01.499 15:07:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:01.499 15:07:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:01.499 15:07:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:01.499 15:07:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:01.499 15:07:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.499 15:07:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:01.499 15:07:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:01.499 15:07:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.499 15:07:01 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:01.499 15:07:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:01.499 15:07:01 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.18 00:14:01.499 15:07:01 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.18 00:14:01.499 15:07:01 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:01.499 15:07:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.18 00:14:01.499 15:07:01 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.18 2 00:14:01.499 remove_attach_helper took 45.18s to complete (handling 2 nvme drive(s)) 15:07:01 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:01.499 15:07:01 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68823 00:14:01.499 15:07:01 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68823 ']' 00:14:01.499 15:07:01 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68823 00:14:01.499 15:07:01 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:14:01.499 15:07:01 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.499 15:07:01 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68823 00:14:01.499 killing process with pid 68823 00:14:01.499 15:07:01 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:01.499 15:07:01 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:01.499 15:07:01 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68823' 00:14:01.499 15:07:01 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68823 00:14:01.499 15:07:01 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68823 00:14:04.037 15:07:04 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:04.297 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:04.867 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:04.867 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:04.867 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:04.867 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:04.867 00:14:04.867 real 2m33.871s 00:14:04.867 user 1m52.351s 00:14:04.867 sys 0m21.808s 00:14:04.867 15:07:05 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.867 ************************************ 00:14:04.867 END TEST sw_hotplug 00:14:04.867 15:07:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:04.867 ************************************ 00:14:05.127 15:07:05 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:14:05.127 15:07:05 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:05.127 15:07:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:05.127 15:07:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.127 15:07:05 -- common/autotest_common.sh@10 -- # set +x 00:14:05.127 ************************************ 00:14:05.127 START TEST nvme_xnvme 00:14:05.127 ************************************ 00:14:05.127 15:07:05 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:05.127 * Looking for test storage... 00:14:05.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:05.127 15:07:05 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:05.127 15:07:05 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:14:05.127 15:07:05 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:05.390 15:07:05 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:05.390 15:07:05 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:05.390 15:07:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:05.390 15:07:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.390 15:07:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:05.390 15:07:06 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:05.390 15:07:06 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:05.390 15:07:06 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:05.390 15:07:06 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:05.390 15:07:06 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.390 15:07:06 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:05.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.390 --rc genhtml_branch_coverage=1 00:14:05.390 --rc genhtml_function_coverage=1 00:14:05.390 --rc genhtml_legend=1 00:14:05.390 --rc geninfo_all_blocks=1 00:14:05.390 --rc geninfo_unexecuted_blocks=1 00:14:05.390 00:14:05.390 ' 00:14:05.390 15:07:06 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:05.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.390 --rc genhtml_branch_coverage=1 00:14:05.390 --rc genhtml_function_coverage=1 00:14:05.390 --rc genhtml_legend=1 00:14:05.390 --rc geninfo_all_blocks=1 00:14:05.390 --rc geninfo_unexecuted_blocks=1 00:14:05.390 00:14:05.390 ' 00:14:05.390 15:07:06 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:05.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.390 --rc genhtml_branch_coverage=1 00:14:05.390 --rc genhtml_function_coverage=1 00:14:05.390 --rc genhtml_legend=1 00:14:05.390 --rc geninfo_all_blocks=1 00:14:05.390 --rc geninfo_unexecuted_blocks=1 00:14:05.390 00:14:05.390 ' 00:14:05.390 15:07:06 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:05.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.390 --rc genhtml_branch_coverage=1 00:14:05.390 --rc genhtml_function_coverage=1 00:14:05.390 --rc genhtml_legend=1 00:14:05.390 --rc geninfo_all_blocks=1 00:14:05.390 --rc geninfo_unexecuted_blocks=1 00:14:05.390 00:14:05.390 ' 00:14:05.390 15:07:06 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:14:05.390 15:07:06 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:14:05.390 15:07:06 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:05.391 15:07:06 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:14:05.391 15:07:06 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:05.391 15:07:06 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:05.391 15:07:06 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:05.391 15:07:06 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:14:05.391 15:07:06 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:14:05.391 15:07:06 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
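
The scripts/common.sh@333-368 walk traced above is the framework's version comparison deciding whether the installed lcov (1.15) predates 2.x, which in turn selects the legacy --rc lcov_branch_coverage=1 option set exported just before. A simplified sketch of that comparison (hypothetical name version_lt; the real lt()/cmp_versions pair also handles >, <=, >=, and == via the lt/gt/eq flags visible in the trace):

    # Split both versions on [.-:] and compare numerically field by field.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater -> not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # 1 < 2 in this run
        done
        return 1   # equal versions are not less-than
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2      # true for lcov 1.15
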
00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:14:05.391 15:07:06 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:14:05.391 15:07:06 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:14:05.391 15:07:06 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:05.391 15:07:06 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:05.391 15:07:06 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:14:05.391 15:07:06 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:14:05.391 15:07:06 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:14:05.391 15:07:06 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:14:05.391 15:07:06 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:14:05.391 15:07:06 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:14:05.391 15:07:06 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:05.391 15:07:06 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:05.391 15:07:06 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:05.391 15:07:06 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:05.391 15:07:06 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:05.391 15:07:06 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:05.391 15:07:06 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:14:05.391 15:07:06 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:05.391 #define SPDK_CONFIG_H 00:14:05.391 #define SPDK_CONFIG_AIO_FSDEV 1 00:14:05.391 #define SPDK_CONFIG_APPS 1 00:14:05.391 #define SPDK_CONFIG_ARCH native 00:14:05.391 #define SPDK_CONFIG_ASAN 1 00:14:05.391 #undef SPDK_CONFIG_AVAHI 00:14:05.391 #undef SPDK_CONFIG_CET 00:14:05.391 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:14:05.391 #define SPDK_CONFIG_COVERAGE 1 00:14:05.391 #define SPDK_CONFIG_CROSS_PREFIX 00:14:05.391 #undef SPDK_CONFIG_CRYPTO 00:14:05.391 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:05.391 #undef SPDK_CONFIG_CUSTOMOCF 00:14:05.391 #undef SPDK_CONFIG_DAOS 00:14:05.391 #define SPDK_CONFIG_DAOS_DIR 00:14:05.391 #define SPDK_CONFIG_DEBUG 1 00:14:05.391 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:05.391 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:05.391 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:05.391 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:05.391 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:05.391 #undef SPDK_CONFIG_DPDK_UADK 00:14:05.391 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:05.391 #define SPDK_CONFIG_EXAMPLES 1 00:14:05.391 #undef SPDK_CONFIG_FC 00:14:05.391 #define SPDK_CONFIG_FC_PATH 00:14:05.391 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:05.392 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:05.392 #define SPDK_CONFIG_FSDEV 1 00:14:05.392 #undef SPDK_CONFIG_FUSE 00:14:05.392 #undef SPDK_CONFIG_FUZZER 00:14:05.392 #define SPDK_CONFIG_FUZZER_LIB 00:14:05.392 #undef SPDK_CONFIG_GOLANG 00:14:05.392 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:05.392 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:05.392 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:05.392 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:05.392 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:05.392 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:05.392 #undef SPDK_CONFIG_HAVE_LZ4 00:14:05.392 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:14:05.392 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:14:05.392 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:05.392 #define SPDK_CONFIG_IDXD 1 00:14:05.392 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:05.392 #undef SPDK_CONFIG_IPSEC_MB 00:14:05.392 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:05.392 #define SPDK_CONFIG_ISAL 1 00:14:05.392 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:05.392 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:05.392 #define SPDK_CONFIG_LIBDIR 00:14:05.392 #undef SPDK_CONFIG_LTO 00:14:05.392 #define SPDK_CONFIG_MAX_LCORES 128 00:14:05.392 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:14:05.392 #define SPDK_CONFIG_NVME_CUSE 1 00:14:05.392 #undef SPDK_CONFIG_OCF 00:14:05.392 #define SPDK_CONFIG_OCF_PATH 00:14:05.392 #define SPDK_CONFIG_OPENSSL_PATH 00:14:05.392 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:05.392 #define SPDK_CONFIG_PGO_DIR 00:14:05.392 #undef SPDK_CONFIG_PGO_USE 00:14:05.392 #define SPDK_CONFIG_PREFIX /usr/local 00:14:05.392 #undef SPDK_CONFIG_RAID5F 00:14:05.392 #undef SPDK_CONFIG_RBD 00:14:05.392 #define SPDK_CONFIG_RDMA 1 00:14:05.392 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:05.392 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:05.392 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:05.392 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:05.392 #define SPDK_CONFIG_SHARED 1 00:14:05.392 #undef SPDK_CONFIG_SMA 00:14:05.392 #define SPDK_CONFIG_TESTS 1 00:14:05.392 #undef SPDK_CONFIG_TSAN 00:14:05.392 #define SPDK_CONFIG_UBLK 1 00:14:05.392 #define SPDK_CONFIG_UBSAN 1 00:14:05.392 #undef SPDK_CONFIG_UNIT_TESTS 00:14:05.392 #undef SPDK_CONFIG_URING 00:14:05.392 #define SPDK_CONFIG_URING_PATH 00:14:05.392 #undef SPDK_CONFIG_URING_ZNS 00:14:05.392 #undef SPDK_CONFIG_USDT 00:14:05.392 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:05.392 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:05.392 #undef SPDK_CONFIG_VFIO_USER 00:14:05.392 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:05.392 #define SPDK_CONFIG_VHOST 1 00:14:05.392 #define SPDK_CONFIG_VIRTIO 1 00:14:05.392 #undef SPDK_CONFIG_VTUNE 00:14:05.392 #define SPDK_CONFIG_VTUNE_DIR 00:14:05.392 #define SPDK_CONFIG_WERROR 1 00:14:05.392 #define SPDK_CONFIG_WPDK_DIR 00:14:05.392 #define SPDK_CONFIG_XNVME 1 00:14:05.392 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:05.392 15:07:06 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:05.392 15:07:06 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.392 15:07:06 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:05.392 15:07:06 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.392 15:07:06 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.392 15:07:06 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.392 15:07:06 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.392 15:07:06 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.392 15:07:06 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.392 15:07:06 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:05.392 15:07:06 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@68 -- # uname -s 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:05.392 
15:07:06 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:14:05.392 15:07:06 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:14:05.392 15:07:06 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:14:05.392 15:07:06 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:14:05.393 15:07:06 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:05.393 15:07:06 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:05.393 15:07:06 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
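
The long run of ': 0' / ': 1' lines paired with export statements above (autotest_common.sh@58-178) is bash's default-and-export idiom: xtrace prints the parameter expansion after substitution, so each ': N' is a test flag either keeping the value injected by autorun-spdk.conf or falling back to its default. One flag from this run, sketched as an assumption about the underlying source:

    # ": ${VAR:=default}" assigns only if VAR is unset or empty; xtrace shows
    # it post-expansion as ": 1" (SPDK_TEST_XNVME=1 came from the CI conf).
    : "${SPDK_TEST_XNVME:=1}"
    export SPDK_TEST_XNVME
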
00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70167 ]] 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70167 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.4C2H3V 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.4C2H3V/tests/xnvme /tmp/spdk.4C2H3V 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:14:05.394 15:07:06 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13977530368 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5590585344 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261657600 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13977530368 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5590585344 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.394 15:07:06 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266273792 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=94457741312 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5245038592 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:14:05.394 * Looking for test storage... 
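
The autotest_common.sh@340-376 block above feeds df -T output (grep -v Filesystem) through read -r source fs size use avail _ mount, indexing every mounted filesystem so set_test_storage can pick a directory with enough free space for the xnvme tests. A condensed sketch of that parse (df -T prints 1K blocks, while the traced values are bytes, so a *1024 scaling is assumed here):

    # Index each mount point: device, fstype, and byte counts.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$(( size * 1024 ))
        uses["$mount"]=$(( use * 1024 ))
        avails["$mount"]=$(( avail * 1024 ))
    done < <(df -T | grep -v Filesystem)

The candidate walk that follows (@382-402) then compares the available space on the mount holding the test directory (/home, 13977530368 bytes free) against the requested size (2214592512 bytes in the trace) before settling on /home/vagrant/spdk_repo/spdk/test/nvme/xnvme.
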
00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13977530368 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:05.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:14:05.394 15:07:06 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:05.395 15:07:06 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:05.395 15:07:06 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:14:05.395 15:07:06 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:14:05.395 15:07:06 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:14:05.395 15:07:06 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:14:05.395 15:07:06 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:14:05.395 15:07:06 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:14:05.395 15:07:06 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:05.395 15:07:06 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:14:05.395 15:07:06 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:05.395 15:07:06 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:14:05.395 15:07:06 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:05.395 15:07:06 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:14:05.395 15:07:06 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:05.655 15:07:06 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:05.655 15:07:06 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:05.655 15:07:06 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:05.655 15:07:06 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:05.655 15:07:06 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.655 15:07:06 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:05.655 15:07:06 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:05.655 15:07:06 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:05.655 15:07:06 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:05.655 15:07:06 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:05.655 15:07:06 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:05.655 15:07:06 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:05.655 15:07:06 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:05.655 15:07:06 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:05.655 15:07:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:05.655 15:07:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:05.655 15:07:06 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:05.655 15:07:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:05.656 15:07:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.656 15:07:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:05.656 15:07:06 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:05.656 15:07:06 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:05.656 15:07:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:05.656 15:07:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.656 15:07:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:05.656 15:07:06 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:05.656 15:07:06 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:05.656 15:07:06 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:05.656 15:07:06 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:05.656 15:07:06 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.656 15:07:06 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:05.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.656 --rc genhtml_branch_coverage=1 00:14:05.656 --rc genhtml_function_coverage=1 00:14:05.656 --rc genhtml_legend=1 00:14:05.656 --rc geninfo_all_blocks=1 00:14:05.656 --rc geninfo_unexecuted_blocks=1 00:14:05.656 00:14:05.656 ' 00:14:05.656 15:07:06 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:05.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.656 --rc genhtml_branch_coverage=1 00:14:05.656 --rc genhtml_function_coverage=1 00:14:05.656 --rc genhtml_legend=1 00:14:05.656 --rc geninfo_all_blocks=1 
00:14:05.656 --rc geninfo_unexecuted_blocks=1 00:14:05.656 00:14:05.656 ' 00:14:05.656 15:07:06 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:05.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.656 --rc genhtml_branch_coverage=1 00:14:05.656 --rc genhtml_function_coverage=1 00:14:05.656 --rc genhtml_legend=1 00:14:05.656 --rc geninfo_all_blocks=1 00:14:05.656 --rc geninfo_unexecuted_blocks=1 00:14:05.656 00:14:05.656 ' 00:14:05.656 15:07:06 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:05.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.656 --rc genhtml_branch_coverage=1 00:14:05.656 --rc genhtml_function_coverage=1 00:14:05.656 --rc genhtml_legend=1 00:14:05.656 --rc geninfo_all_blocks=1 00:14:05.656 --rc geninfo_unexecuted_blocks=1 00:14:05.656 00:14:05.656 ' 00:14:05.656 15:07:06 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.656 15:07:06 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:05.656 15:07:06 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.656 15:07:06 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.656 15:07:06 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.656 15:07:06 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.656 15:07:06 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.656 15:07:06 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.656 15:07:06 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:05.656 15:07:06 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.656 15:07:06 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:14:05.656 15:07:06 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:06.226 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:06.485 Waiting for block devices as requested 00:14:06.485 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:06.744 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:06.744 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:06.744 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:12.015 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:12.015 15:07:12 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:14:12.582 15:07:13 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:14:12.582 15:07:13 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:14:12.582 15:07:13 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:14:12.582 15:07:13 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:14:12.582 15:07:13 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:14:12.582 15:07:13 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:14:12.582 15:07:13 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:14:12.841 No valid GPT data, bailing 00:14:12.841 15:07:13 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:12.841 15:07:13 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:14:12.841 15:07:13 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:14:12.841 15:07:13 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:14:12.841 15:07:13 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:14:12.841 15:07:13 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:14:12.841 15:07:13 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:14:12.841 15:07:13 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:14:12.841 15:07:13 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:12.841 15:07:13 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:12.841 15:07:13 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:14:12.841 15:07:13 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:14:12.841 15:07:13 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:12.841 15:07:13 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:12.841 15:07:13 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:12.841 15:07:13 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:12.841 15:07:13 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:12.841 15:07:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:12.841 15:07:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.841 15:07:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:12.841 ************************************ 00:14:12.841 START TEST xnvme_rpc 00:14:12.841 ************************************ 00:14:12.841 15:07:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:12.841 15:07:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:12.841 15:07:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:12.841 15:07:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:12.841 15:07:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:12.841 15:07:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70562 00:14:12.841 15:07:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70562 00:14:12.841 15:07:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70562 ']' 00:14:12.841 15:07:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:12.841 15:07:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.841 15:07:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.841 15:07:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.841 15:07:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.841 15:07:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.841 [2024-11-20 15:07:13.624212] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
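
Just before this, prep_nvme probed each /dev/nvme*n* node and treated it as free once spdk-gpt.py reported "No valid GPT data, bailing" and blkid returned an empty PTTYPE. A hedged sketch of that probe, simplified from the scripts/common.sh helper traced above (the exact success/failure polarity of spdk-gpt.py is an assumption):

# Sketch: a namespace with no GPT and an empty blkid PTTYPE is unclaimed.
# Paths match the trace; error handling is pared down for illustration.
shopt -s extglob   # enables the !(*p*) glob that skips partition nodes
block_in_use() {
    local block=$1 pt
    # Assume spdk-gpt.py exits zero when it finds a valid GPT on the device.
    /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$block" && return 0
    pt=$(blkid -s PTTYPE -o value "$block")
    [[ -n $pt ]]   # any other partition-table type also counts as in use
}
for nvme in /dev/nvme*n!(*p*); do
    block_in_use "$nvme" || echo "$nvme looks free for testing"
done
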
00:14:12.841 [2024-11-20 15:07:13.624579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70562 ] 00:14:13.099 [2024-11-20 15:07:13.815118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.358 [2024-11-20 15:07:13.959204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.296 15:07:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.296 15:07:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:14.296 15:07:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:14:14.296 15:07:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.296 15:07:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.296 xnvme_bdev 00:14:14.296 15:07:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.296 15:07:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:14.296 15:07:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:14.296 15:07:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.296 15:07:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:14.296 15:07:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:14.296 15:07:15 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.296 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.554 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.554 15:07:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:14.554 15:07:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:14.554 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.554 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.554 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.554 15:07:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70562 00:14:14.554 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70562 ']' 00:14:14.554 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70562 00:14:14.554 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:14.554 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:14.554 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70562 00:14:14.554 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:14.554 killing process with pid 70562 00:14:14.554 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:14.555 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70562' 00:14:14.555 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70562 00:14:14.555 15:07:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70562 00:14:17.159 00:14:17.159 real 0m4.349s 00:14:17.159 user 0m4.217s 00:14:17.159 sys 0m0.720s 00:14:17.159 15:07:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:17.159 ************************************ 00:14:17.159 END TEST xnvme_rpc 00:14:17.159 ************************************ 00:14:17.159 15:07:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.159 15:07:17 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:17.159 15:07:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:17.159 15:07:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:17.159 15:07:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.159 ************************************ 00:14:17.159 START TEST xnvme_bdevperf 00:14:17.159 ************************************ 00:14:17.159 15:07:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:17.159 15:07:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:17.159 15:07:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:14:17.159 15:07:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:17.159 15:07:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:17.159 15:07:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:17.159 15:07:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:17.159 15:07:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:17.159 { 00:14:17.159 "subsystems": [ 00:14:17.159 { 00:14:17.159 "subsystem": "bdev", 00:14:17.159 "config": [ 00:14:17.159 { 00:14:17.159 "params": { 00:14:17.159 "io_mechanism": "libaio", 00:14:17.159 "conserve_cpu": false, 00:14:17.159 "filename": "/dev/nvme0n1", 00:14:17.159 "name": "xnvme_bdev" 00:14:17.159 }, 00:14:17.159 "method": "bdev_xnvme_create" 00:14:17.159 }, 00:14:17.159 { 00:14:17.159 "method": "bdev_wait_for_examine" 00:14:17.159 } 00:14:17.159 ] 00:14:17.159 } 00:14:17.159 ] 00:14:17.159 } 00:14:17.417 [2024-11-20 15:07:18.031334] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:14:17.417 [2024-11-20 15:07:18.031463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70647 ] 00:14:17.417 [2024-11-20 15:07:18.213986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.675 [2024-11-20 15:07:18.359588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.239 Running I/O for 5 seconds... 00:14:20.123 41190.00 IOPS, 160.90 MiB/s [2024-11-20T15:07:21.896Z] 35914.00 IOPS, 140.29 MiB/s [2024-11-20T15:07:22.831Z] 34628.67 IOPS, 135.27 MiB/s [2024-11-20T15:07:24.210Z] 35807.00 IOPS, 139.87 MiB/s 00:14:23.374 Latency(us) 00:14:23.374 [2024-11-20T15:07:24.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.374 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:23.374 xnvme_bdev : 5.00 36790.78 143.71 0.00 0.00 1736.06 202.33 8474.94 00:14:23.374 [2024-11-20T15:07:24.210Z] =================================================================================================================== 00:14:23.374 [2024-11-20T15:07:24.210Z] Total : 36790.78 143.71 0.00 0.00 1736.06 202.33 8474.94 00:14:24.311 15:07:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:24.311 15:07:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:24.311 15:07:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:24.311 15:07:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:24.311 15:07:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:24.311 { 00:14:24.311 "subsystems": [ 00:14:24.311 { 00:14:24.311 "subsystem": "bdev", 00:14:24.311 "config": [ 00:14:24.311 { 00:14:24.311 "params": { 00:14:24.311 "io_mechanism": "libaio", 00:14:24.311 "conserve_cpu": false, 00:14:24.311 "filename": "/dev/nvme0n1", 00:14:24.311 "name": "xnvme_bdev" 00:14:24.311 }, 00:14:24.311 "method": "bdev_xnvme_create" 00:14:24.311 }, 00:14:24.311 { 00:14:24.311 "method": "bdev_wait_for_examine" 00:14:24.311 } 00:14:24.311 ] 00:14:24.311 } 00:14:24.311 ] 00:14:24.311 } 00:14:24.311 [2024-11-20 15:07:25.087408] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
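
Each bdevperf pass above emits the JSON block shown in the trace and hands it over an anonymous fd (--json /dev/fd/62). A minimal standalone reproduction with the same flags; inlining the config through process substitution is an assumption about how gen_conf's output is wired in:

# Same bdev config and flags as the randread run above, JSON inlined.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(cat <<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"io_mechanism":"libaio","conserve_cpu":false,
             "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
   "method":"bdev_xnvme_create"},
  {"method":"bdev_wait_for_examine"}]}]}
JSON
) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

Swapping -w randread for -w randwrite reproduces the second pass.
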
00:14:24.311 [2024-11-20 15:07:25.087702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70728 ] 00:14:24.572 [2024-11-20 15:07:25.260933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.830 [2024-11-20 15:07:25.429475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.089 Running I/O for 5 seconds... 00:14:27.404 48313.00 IOPS, 188.72 MiB/s [2024-11-20T15:07:28.809Z] 47326.50 IOPS, 184.87 MiB/s [2024-11-20T15:07:30.205Z] 44785.33 IOPS, 174.94 MiB/s [2024-11-20T15:07:31.152Z] 44664.75 IOPS, 174.47 MiB/s [2024-11-20T15:07:31.152Z] 45143.20 IOPS, 176.34 MiB/s 00:14:30.316 Latency(us) 00:14:30.316 [2024-11-20T15:07:31.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.316 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:30.316 xnvme_bdev : 5.00 45129.65 176.29 0.00 0.00 1415.14 204.80 6211.44 00:14:30.316 [2024-11-20T15:07:31.152Z] =================================================================================================================== 00:14:30.316 [2024-11-20T15:07:31.152Z] Total : 45129.65 176.29 0.00 0.00 1415.14 204.80 6211.44 00:14:31.258 00:14:31.258 real 0m14.039s 00:14:31.258 user 0m5.225s 00:14:31.258 sys 0m6.515s 00:14:31.258 15:07:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.258 15:07:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:31.258 ************************************ 00:14:31.258 END TEST xnvme_bdevperf 00:14:31.258 ************************************ 00:14:31.258 15:07:32 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:31.258 15:07:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:31.258 15:07:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.258 15:07:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:31.258 ************************************ 00:14:31.258 START TEST xnvme_fio_plugin 00:14:31.258 ************************************ 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:31.258 15:07:32 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:31.258 15:07:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:31.517 { 00:14:31.517 "subsystems": [ 00:14:31.517 { 00:14:31.517 "subsystem": "bdev", 00:14:31.517 "config": [ 00:14:31.517 { 00:14:31.517 "params": { 00:14:31.517 "io_mechanism": "libaio", 00:14:31.517 "conserve_cpu": false, 00:14:31.517 "filename": "/dev/nvme0n1", 00:14:31.517 "name": "xnvme_bdev" 00:14:31.517 }, 00:14:31.517 "method": "bdev_xnvme_create" 00:14:31.517 }, 00:14:31.517 { 00:14:31.517 "method": "bdev_wait_for_examine" 00:14:31.517 } 00:14:31.517 ] 00:14:31.517 } 00:14:31.517 ] 00:14:31.517 } 00:14:31.517 15:07:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:31.517 15:07:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:31.517 15:07:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:31.517 15:07:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:31.517 15:07:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:31.517 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:31.517 fio-3.35 00:14:31.517 Starting 1 thread 00:14:38.091 00:14:38.091 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70858: Wed Nov 20 15:07:38 2024 00:14:38.091 read: IOPS=39.0k, BW=152MiB/s (160MB/s)(762MiB/5001msec) 00:14:38.091 slat (usec): min=4, max=1395, avg=22.70, stdev=31.47 00:14:38.091 clat (usec): min=55, max=10133, avg=943.40, stdev=614.67 00:14:38.091 lat (usec): min=126, max=10189, avg=966.11, stdev=619.47 00:14:38.091 clat percentiles (usec): 00:14:38.091 | 1.00th=[ 172], 5.00th=[ 258], 10.00th=[ 330], 20.00th=[ 465], 00:14:38.091 | 30.00th=[ 594], 40.00th=[ 725], 50.00th=[ 848], 60.00th=[ 971], 00:14:38.091 | 70.00th=[ 1106], 80.00th=[ 1270], 90.00th=[ 1565], 95.00th=[ 1991], 00:14:38.091 | 99.00th=[ 3458], 99.50th=[ 4015], 99.90th=[ 4883], 99.95th=[ 5211], 00:14:38.091 | 99.99th=[ 5866] 00:14:38.091 bw ( KiB/s): 
min=134024, max=174976, per=98.07%, avg=152915.00, stdev=13411.45, samples=9 00:14:38.091 iops : min=33506, max=43744, avg=38228.67, stdev=3352.77, samples=9 00:14:38.091 lat (usec) : 100=0.07%, 250=4.52%, 500=18.13%, 750=19.46%, 1000=19.92% 00:14:38.091 lat (msec) : 2=32.96%, 4=4.45%, 10=0.51%, 20=0.01% 00:14:38.091 cpu : usr=23.76%, sys=57.26%, ctx=46, majf=0, minf=764 00:14:38.091 IO depths : 1=0.1%, 2=1.2%, 4=4.3%, 8=11.4%, 16=25.9%, 32=55.3%, >=64=1.8% 00:14:38.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.091 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:14:38.091 issued rwts: total=194954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.091 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:38.091 00:14:38.091 Run status group 0 (all jobs): 00:14:38.091 READ: bw=152MiB/s (160MB/s), 152MiB/s-152MiB/s (160MB/s-160MB/s), io=762MiB (799MB), run=5001-5001msec 00:14:39.074 ----------------------------------------------------- 00:14:39.074 Suppressions used: 00:14:39.074 count bytes template 00:14:39.074 1 11 /usr/src/fio/parse.c 00:14:39.074 1 8 libtcmalloc_minimal.so 00:14:39.074 1 904 libcrypto.so 00:14:39.074 ----------------------------------------------------- 00:14:39.074 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:39.074 15:07:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:39.074 { 00:14:39.074 "subsystems": [ 00:14:39.074 { 00:14:39.074 "subsystem": "bdev", 00:14:39.074 "config": [ 00:14:39.074 { 00:14:39.074 "params": { 00:14:39.074 "io_mechanism": "libaio", 00:14:39.074 "conserve_cpu": false, 00:14:39.074 "filename": "/dev/nvme0n1", 00:14:39.074 "name": "xnvme_bdev" 00:14:39.074 }, 00:14:39.074 "method": "bdev_xnvme_create" 00:14:39.074 }, 00:14:39.074 { 00:14:39.074 "method": "bdev_wait_for_examine" 00:14:39.074 } 00:14:39.074 ] 00:14:39.074 } 00:14:39.074 ] 00:14:39.074 } 00:14:39.074 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:39.074 fio-3.35 00:14:39.074 Starting 1 thread 00:14:45.651 00:14:45.651 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70950: Wed Nov 20 15:07:45 2024 00:14:45.651 write: IOPS=53.3k, BW=208MiB/s (218MB/s)(1040MiB/5001msec); 0 zone resets 00:14:45.651 slat (usec): min=4, max=742, avg=16.52, stdev=27.78 00:14:45.651 clat (usec): min=85, max=5711, avg=706.31, stdev=333.45 00:14:45.651 lat (usec): min=98, max=5858, avg=722.83, stdev=331.81 00:14:45.651 clat percentiles (usec): 00:14:45.651 | 1.00th=[ 159], 5.00th=[ 241], 10.00th=[ 289], 20.00th=[ 392], 00:14:45.651 | 30.00th=[ 490], 40.00th=[ 586], 50.00th=[ 685], 60.00th=[ 783], 00:14:45.651 | 70.00th=[ 889], 80.00th=[ 996], 90.00th=[ 1123], 95.00th=[ 1237], 00:14:45.651 | 99.00th=[ 1467], 99.50th=[ 1614], 99.90th=[ 2671], 99.95th=[ 3228], 00:14:45.651 | 99.99th=[ 4424] 00:14:45.651 bw ( KiB/s): min=189200, max=231688, per=100.00%, avg=213060.44, stdev=16424.23, samples=9 00:14:45.651 iops : min=47300, max=57922, avg=53265.11, stdev=4106.06, samples=9 00:14:45.651 lat (usec) : 100=0.11%, 250=5.76%, 500=25.17%, 750=25.46%, 1000=23.82% 00:14:45.651 lat (msec) : 2=19.45%, 4=0.20%, 10=0.02% 00:14:45.651 cpu : usr=25.84%, sys=60.34%, ctx=398, majf=0, minf=765 00:14:45.651 IO depths : 1=0.1%, 2=0.8%, 4=3.6%, 8=11.3%, 16=26.8%, 32=55.7%, >=64=1.7% 00:14:45.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.651 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:14:45.651 issued rwts: total=0,266356,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:45.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:45.651 00:14:45.651 Run status group 0 (all jobs): 00:14:45.651 WRITE: bw=208MiB/s (218MB/s), 208MiB/s-208MiB/s (218MB/s-218MB/s), io=1040MiB (1091MB), run=5001-5001msec 00:14:46.217 ----------------------------------------------------- 00:14:46.217 Suppressions used: 00:14:46.217 count bytes template 00:14:46.217 1 11 /usr/src/fio/parse.c 00:14:46.217 1 8 libtcmalloc_minimal.so 00:14:46.217 1 904 libcrypto.so 00:14:46.217 ----------------------------------------------------- 00:14:46.217 00:14:46.217 00:14:46.217 real 0m15.012s 00:14:46.217 
user 0m6.310s 00:14:46.217 sys 0m6.735s 00:14:46.217 15:07:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.217 15:07:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:46.217 ************************************ 00:14:46.217 END TEST xnvme_fio_plugin 00:14:46.217 ************************************ 00:14:46.475 15:07:47 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:46.475 15:07:47 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:46.475 15:07:47 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:46.475 15:07:47 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:46.475 15:07:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:46.475 15:07:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.475 15:07:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:46.475 ************************************ 00:14:46.475 START TEST xnvme_rpc 00:14:46.475 ************************************ 00:14:46.475 15:07:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:46.475 15:07:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:46.475 15:07:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:46.475 15:07:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:46.475 15:07:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:46.475 15:07:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71042 00:14:46.475 15:07:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71042 00:14:46.475 15:07:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:46.475 15:07:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71042 ']' 00:14:46.475 15:07:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.475 15:07:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.475 15:07:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.475 15:07:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.475 15:07:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.475 [2024-11-20 15:07:47.235745] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
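
The xnvme_fio_plugin test that just finished resolves the plugin's ASAN runtime with ldd and preloads it ahead of the plugin itself, as its LD_PRELOAD trace line shows. A condensed sketch of that step; note fd 62 must carry the JSON config, as the harness arranges:

# Sanitizer-preload step from the fio_plugin trace, condensed.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
if [[ -n "$asan_lib" ]]; then
    # /dev/fd/62 is the config fd the harness supplies (see the JSON above).
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k \
        --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 \
        --thread=1 --name xnvme_bdev
fi
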
00:14:46.475 [2024-11-20 15:07:47.236165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71042 ] 00:14:46.733 [2024-11-20 15:07:47.418398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.733 [2024-11-20 15:07:47.533641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.669 xnvme_bdev 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.669 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71042 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71042 ']' 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71042 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71042 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.928 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.928 killing process with pid 71042 00:14:47.929 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71042' 00:14:47.929 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71042 00:14:47.929 15:07:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71042 00:14:50.461 00:14:50.461 real 0m3.877s 00:14:50.461 user 0m3.934s 00:14:50.461 sys 0m0.562s 00:14:50.461 15:07:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.461 15:07:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.461 ************************************ 00:14:50.461 END TEST xnvme_rpc 00:14:50.461 ************************************ 00:14:50.461 15:07:51 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:50.461 15:07:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:50.461 15:07:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.461 15:07:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:50.461 ************************************ 00:14:50.461 START TEST xnvme_bdevperf 00:14:50.461 ************************************ 00:14:50.461 15:07:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:50.461 15:07:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:50.461 15:07:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:14:50.461 15:07:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:50.461 15:07:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:50.461 15:07:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
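
This second xnvme_rpc pass creates the bdev with -c (conserve_cpu), reads each creation parameter back through framework_get_config, and deletes it again. The same round-trip against a running spdk_tgt, using scripts/rpc.py directly where the trace goes through its rpc_cmd wrapper:

# RPC round-trip: create with conserve_cpu, verify one param, delete.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
# jq filter is verbatim from the trace; expected output here: true
$rpc framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
$rpc bdev_xnvme_delete xnvme_bdev
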
00:14:50.461 15:07:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:50.461 15:07:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:50.461 { 00:14:50.461 "subsystems": [ 00:14:50.461 { 00:14:50.461 "subsystem": "bdev", 00:14:50.461 "config": [ 00:14:50.461 { 00:14:50.461 "params": { 00:14:50.461 "io_mechanism": "libaio", 00:14:50.461 "conserve_cpu": true, 00:14:50.461 "filename": "/dev/nvme0n1", 00:14:50.461 "name": "xnvme_bdev" 00:14:50.461 }, 00:14:50.461 "method": "bdev_xnvme_create" 00:14:50.461 }, 00:14:50.461 { 00:14:50.461 "method": "bdev_wait_for_examine" 00:14:50.461 } 00:14:50.461 ] 00:14:50.461 } 00:14:50.461 ] 00:14:50.461 } 00:14:50.461 [2024-11-20 15:07:51.169642] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:14:50.461 [2024-11-20 15:07:51.169774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71126 ] 00:14:50.721 [2024-11-20 15:07:51.349274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.721 [2024-11-20 15:07:51.455129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.979 Running I/O for 5 seconds... 00:14:53.287 42922.00 IOPS, 167.66 MiB/s [2024-11-20T15:07:55.056Z] 42025.50 IOPS, 164.16 MiB/s [2024-11-20T15:07:55.993Z] 41638.33 IOPS, 162.65 MiB/s [2024-11-20T15:07:56.925Z] 41609.75 IOPS, 162.54 MiB/s 00:14:56.089 Latency(us) 00:14:56.089 [2024-11-20T15:07:56.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.089 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:56.089 xnvme_bdev : 5.00 42520.97 166.10 0.00 0.00 1502.02 164.50 3184.68 00:14:56.089 [2024-11-20T15:07:56.925Z] =================================================================================================================== 00:14:56.089 [2024-11-20T15:07:56.925Z] Total : 42520.97 166.10 0.00 0.00 1502.02 164.50 3184.68 00:14:57.469 15:07:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:57.469 15:07:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:57.469 15:07:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:57.469 15:07:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:57.469 15:07:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:57.469 { 00:14:57.469 "subsystems": [ 00:14:57.469 { 00:14:57.469 "subsystem": "bdev", 00:14:57.469 "config": [ 00:14:57.469 { 00:14:57.469 "params": { 00:14:57.469 "io_mechanism": "libaio", 00:14:57.469 "conserve_cpu": true, 00:14:57.469 "filename": "/dev/nvme0n1", 00:14:57.469 "name": "xnvme_bdev" 00:14:57.469 }, 00:14:57.469 "method": "bdev_xnvme_create" 00:14:57.469 }, 00:14:57.469 { 00:14:57.469 "method": "bdev_wait_for_examine" 00:14:57.469 } 00:14:57.469 ] 00:14:57.469 } 00:14:57.469 ] 00:14:57.469 } 00:14:57.469 [2024-11-20 15:07:58.028852] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
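
xnvme_bdevperf, mid-run above, selects its workload list with a bash nameref: the traced local -n io_pattern_ref=libaio aliases the array named after the current io mechanism (the indirection through a variable like $io is an inference from the surrounding loop). A standalone illustration; the function name here is illustrative, not from the harness:

# Nameref dispatch: io_pattern_ref aliases whichever array $io names, so one
# loop serves libaio, io_uring and io_uring_cmd alike.
libaio=('randread' 'randwrite')
io_uring=('randread' 'randwrite')
io=libaio
run_patterns() {   # illustrative stand-in for xnvme_bdevperf
    local io_pattern
    local -n io_pattern_ref=$io
    for io_pattern in "${io_pattern_ref[@]}"; do
        echo "bdevperf pass: -w $io_pattern"
    done
}
run_patterns
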
00:14:57.469 [2024-11-20 15:07:58.028979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71203 ] 00:14:57.469 [2024-11-20 15:07:58.210664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.727 [2024-11-20 15:07:58.319659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.986 Running I/O for 5 seconds... 00:14:59.865 42188.00 IOPS, 164.80 MiB/s [2024-11-20T15:08:02.079Z] 42112.50 IOPS, 164.50 MiB/s [2024-11-20T15:08:03.014Z] 42325.00 IOPS, 165.33 MiB/s [2024-11-20T15:08:03.951Z] 42369.25 IOPS, 165.50 MiB/s 00:15:03.115 Latency(us) 00:15:03.115 [2024-11-20T15:08:03.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.115 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:03.115 xnvme_bdev : 5.00 42328.21 165.34 0.00 0.00 1508.51 155.45 3303.12 00:15:03.115 [2024-11-20T15:08:03.951Z] =================================================================================================================== 00:15:03.115 [2024-11-20T15:08:03.951Z] Total : 42328.21 165.34 0.00 0.00 1508.51 155.45 3303.12 00:15:04.053 00:15:04.053 real 0m13.724s 00:15:04.053 user 0m4.879s 00:15:04.053 sys 0m5.911s 00:15:04.053 15:08:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.053 15:08:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:04.053 ************************************ 00:15:04.053 END TEST xnvme_bdevperf 00:15:04.053 ************************************ 00:15:04.053 15:08:04 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:04.053 15:08:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:04.053 15:08:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:04.053 15:08:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:04.053 ************************************ 00:15:04.053 START TEST xnvme_fio_plugin 00:15:04.053 ************************************ 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
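
From here the log repeats the whole rpc / bdevperf / fio_plugin block for conserve_cpu=true, and later for the other io mechanisms. A condensed sketch of the driving loop; names and xnvme.sh line references are taken from the trace, while run_test is stubbed here for illustration:

# Driver loop behind this log's repetition; run_test is stubbed.
run_test() { echo "would run: $*"; }
declare -A method_bdev_xnvme_create_0
declare -A xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1'
                           ['io_uring_cmd']='/dev/ng0n1')   # xnvme/common.sh@45
xnvme_io=('libaio' 'io_uring' 'io_uring_cmd')               # xnvme/common.sh@12
xnvme_conserve_cpu=('false' 'true')                         # xnvme/common.sh@51
for io in "${xnvme_io[@]}"; do                              # xnvme/xnvme.sh@75
    method_bdev_xnvme_create_0["io_mechanism"]=$io
    method_bdev_xnvme_create_0["filename"]=${xnvme_filename[$io]}
    for cc in "${xnvme_conserve_cpu[@]}"; do                # xnvme/xnvme.sh@82
        method_bdev_xnvme_create_0["conserve_cpu"]=$cc
        run_test xnvme_rpc xnvme_rpc                        # xnvme/xnvme.sh@86
        run_test xnvme_bdevperf xnvme_bdevperf              # xnvme/xnvme.sh@87
        run_test xnvme_fio_plugin xnvme_fio_plugin          # xnvme/xnvme.sh@88
    done
done
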
00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:04.053 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:04.313 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:04.313 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:04.313 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:04.313 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:04.313 15:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:04.313 { 00:15:04.313 "subsystems": [ 00:15:04.313 { 00:15:04.313 "subsystem": "bdev", 00:15:04.313 "config": [ 00:15:04.313 { 00:15:04.313 "params": { 00:15:04.313 "io_mechanism": "libaio", 00:15:04.313 "conserve_cpu": true, 00:15:04.313 "filename": "/dev/nvme0n1", 00:15:04.313 "name": "xnvme_bdev" 00:15:04.313 }, 00:15:04.313 "method": "bdev_xnvme_create" 00:15:04.313 }, 00:15:04.313 { 00:15:04.313 "method": "bdev_wait_for_examine" 00:15:04.313 } 00:15:04.313 ] 00:15:04.313 } 00:15:04.313 ] 00:15:04.313 } 00:15:04.313 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:04.313 fio-3.35 00:15:04.313 Starting 1 thread 00:15:10.886 00:15:10.886 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71333: Wed Nov 20 15:08:10 2024 00:15:10.886 read: IOPS=34.6k, BW=135MiB/s (142MB/s)(676MiB/5001msec) 00:15:10.886 slat (usec): min=4, max=1454, avg=25.65, stdev=45.10 00:15:10.886 clat (usec): min=74, max=10633, avg=1067.33, stdev=653.61 00:15:10.886 lat (usec): min=79, max=10684, avg=1092.98, stdev=656.96 00:15:10.886 clat percentiles (usec): 00:15:10.886 | 1.00th=[ 188], 5.00th=[ 297], 10.00th=[ 383], 20.00th=[ 537], 00:15:10.887 | 30.00th=[ 685], 40.00th=[ 824], 50.00th=[ 963], 60.00th=[ 1123], 00:15:10.887 | 70.00th=[ 1287], 80.00th=[ 1500], 90.00th=[ 1795], 95.00th=[ 2114], 00:15:10.887 | 99.00th=[ 3490], 99.50th=[ 4178], 99.90th=[ 5407], 99.95th=[ 5866], 00:15:10.887 | 99.99th=[ 6915] 00:15:10.887 bw ( KiB/s): min=113664, max=177912, per=100.00%, avg=139267.56, stdev=18061.09, 
samples=9 00:15:10.887 iops : min=28416, max=44478, avg=34816.89, stdev=4515.27, samples=9 00:15:10.887 lat (usec) : 100=0.08%, 250=2.67%, 500=14.82%, 750=17.32%, 1000=17.63% 00:15:10.887 lat (msec) : 2=41.15%, 4=5.69%, 10=0.63%, 20=0.01% 00:15:10.887 cpu : usr=19.94%, sys=62.16%, ctx=221, majf=0, minf=764 00:15:10.887 IO depths : 1=0.2%, 2=1.1%, 4=4.1%, 8=11.2%, 16=25.8%, 32=55.8%, >=64=1.8% 00:15:10.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.887 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:15:10.887 issued rwts: total=173103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:10.887 00:15:10.887 Run status group 0 (all jobs): 00:15:10.887 READ: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=676MiB (709MB), run=5001-5001msec 00:15:11.454 ----------------------------------------------------- 00:15:11.454 Suppressions used: 00:15:11.454 count bytes template 00:15:11.454 1 11 /usr/src/fio/parse.c 00:15:11.454 1 8 libtcmalloc_minimal.so 00:15:11.454 1 904 libcrypto.so 00:15:11.454 ----------------------------------------------------- 00:15:11.454 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:11.714 15:08:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:11.714 { 00:15:11.714 "subsystems": [ 00:15:11.714 { 00:15:11.714 "subsystem": "bdev", 00:15:11.714 "config": [ 00:15:11.714 { 00:15:11.714 "params": { 00:15:11.714 "io_mechanism": "libaio", 00:15:11.714 "conserve_cpu": true, 00:15:11.714 "filename": "/dev/nvme0n1", 00:15:11.714 "name": "xnvme_bdev" 00:15:11.714 }, 00:15:11.714 "method": "bdev_xnvme_create" 00:15:11.714 }, 00:15:11.714 { 00:15:11.714 "method": "bdev_wait_for_examine" 00:15:11.714 } 00:15:11.714 ] 00:15:11.714 } 00:15:11.714 ] 00:15:11.714 } 00:15:11.714 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:11.714 fio-3.35 00:15:11.714 Starting 1 thread 00:15:18.283 00:15:18.283 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71431: Wed Nov 20 15:08:18 2024 00:15:18.283 write: IOPS=37.9k, BW=148MiB/s (155MB/s)(739MiB/5001msec); 0 zone resets 00:15:18.283 slat (usec): min=4, max=2661, avg=22.74, stdev=31.89 00:15:18.283 clat (usec): min=12, max=44991, avg=1003.95, stdev=1142.22 00:15:18.283 lat (usec): min=61, max=45035, avg=1026.69, stdev=1144.79 00:15:18.283 clat percentiles (usec): 00:15:18.283 | 1.00th=[ 184], 5.00th=[ 273], 10.00th=[ 351], 20.00th=[ 490], 00:15:18.283 | 30.00th=[ 619], 40.00th=[ 750], 50.00th=[ 873], 60.00th=[ 1004], 00:15:18.283 | 70.00th=[ 1139], 80.00th=[ 1287], 90.00th=[ 1549], 95.00th=[ 2040], 00:15:18.283 | 99.00th=[ 3851], 99.50th=[ 4555], 99.90th=[13960], 99.95th=[16450], 00:15:18.283 | 99.99th=[44303] 00:15:18.283 bw ( KiB/s): min=129736, max=175104, per=99.58%, avg=150767.00, stdev=15732.87, samples=9 00:15:18.283 iops : min=32434, max=43776, avg=37691.67, stdev=3933.21, samples=9 00:15:18.283 lat (usec) : 20=0.01%, 50=0.01%, 100=0.09%, 250=3.81%, 500=16.83% 00:15:18.283 lat (usec) : 750=19.32%, 1000=19.85% 00:15:18.283 lat (msec) : 2=34.89%, 4=4.36%, 10=0.66%, 20=0.15%, 50=0.03% 00:15:18.283 cpu : usr=26.80%, sys=53.80%, ctx=162, majf=0, minf=765 00:15:18.283 IO depths : 1=0.1%, 2=0.9%, 4=4.2%, 8=11.5%, 16=26.0%, 32=55.5%, >=64=1.8% 00:15:18.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.283 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:15:18.283 issued rwts: total=0,189298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.283 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:18.283 00:15:18.283 Run status group 0 (all jobs): 00:15:18.283 WRITE: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=739MiB (775MB), run=5001-5001msec 00:15:18.851 ----------------------------------------------------- 00:15:18.851 Suppressions used: 00:15:18.851 count bytes template 00:15:18.851 1 11 /usr/src/fio/parse.c 00:15:18.851 1 8 libtcmalloc_minimal.so 00:15:18.851 1 904 libcrypto.so 00:15:18.851 ----------------------------------------------------- 00:15:18.851 00:15:18.851 00:15:18.852 real 0m14.809s 00:15:18.852 user 0m6.034s 
00:15:18.852 sys 0m6.566s 00:15:18.852 15:08:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.852 15:08:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:18.852 ************************************ 00:15:18.852 END TEST xnvme_fio_plugin 00:15:18.852 ************************************ 00:15:19.110 15:08:19 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:19.110 15:08:19 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:19.110 15:08:19 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:15:19.110 15:08:19 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:15:19.110 15:08:19 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:19.110 15:08:19 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:19.110 15:08:19 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:19.110 15:08:19 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:19.110 15:08:19 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:19.110 15:08:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:19.110 15:08:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:19.110 15:08:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:19.110 ************************************ 00:15:19.110 START TEST xnvme_rpc 00:15:19.110 ************************************ 00:15:19.110 15:08:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:19.110 15:08:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:19.110 15:08:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:19.110 15:08:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:19.110 15:08:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:19.110 15:08:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71517 00:15:19.110 15:08:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:19.111 15:08:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71517 00:15:19.111 15:08:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71517 ']' 00:15:19.111 15:08:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.111 15:08:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.111 15:08:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.111 15:08:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.111 15:08:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.111 [2024-11-20 15:08:19.865615] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
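The xnvme_rpc test starting up here is a plain create/inspect/delete cycle against spdk_tgt. Expressed with SPDK's stock scripts/rpc.py client rather than the test's rpc_cmd wrapper (a sketch; the positional arguments mirror the rpc_cmd call below, where the empty fourth slot is cc["false"], i.e. conserve_cpu off):

# create an xnvme bdev on top of the raw namespace, io_uring mechanism
scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
# read one parameter back the same way the rpc_xnvme helper does
scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
# tear it down again
scripts/rpc.py bdev_xnvme_delete xnvme_bdev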
00:15:19.111 [2024-11-20 15:08:19.865752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71517 ] 00:15:19.369 [2024-11-20 15:08:20.047777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.369 [2024-11-20 15:08:20.160419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.308 xnvme_bdev 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.308 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.567 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:20.567 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:20.567 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:20.567 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:20.567 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.567 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.567 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.567 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:15:20.567 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:20.567 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:20.567 15:08:21 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:20.567 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.567 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.567 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.568 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:20.568 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:20.568 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.568 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.568 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.568 15:08:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71517 00:15:20.568 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71517 ']' 00:15:20.568 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71517 00:15:20.568 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:20.568 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.568 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71517 00:15:20.568 killing process with pid 71517 00:15:20.568 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:20.568 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:20.568 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71517' 00:15:20.568 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71517 00:15:20.568 15:08:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71517 00:15:23.104 ************************************ 00:15:23.104 END TEST xnvme_rpc 00:15:23.104 ************************************ 00:15:23.104 00:15:23.104 real 0m4.131s 00:15:23.104 user 0m4.184s 00:15:23.104 sys 0m0.551s 00:15:23.104 15:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.104 15:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.363 15:08:23 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:23.363 15:08:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:23.363 15:08:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.363 15:08:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:23.363 ************************************ 00:15:23.363 START TEST xnvme_bdevperf 00:15:23.363 ************************************ 00:15:23.363 15:08:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:23.363 15:08:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:23.363 15:08:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:15:23.363 15:08:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:23.363 15:08:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:23.363 15:08:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:15:23.363 15:08:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:23.363 15:08:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:23.363 { 00:15:23.363 "subsystems": [ 00:15:23.363 { 00:15:23.363 "subsystem": "bdev", 00:15:23.363 "config": [ 00:15:23.363 { 00:15:23.363 "params": { 00:15:23.363 "io_mechanism": "io_uring", 00:15:23.363 "conserve_cpu": false, 00:15:23.363 "filename": "/dev/nvme0n1", 00:15:23.363 "name": "xnvme_bdev" 00:15:23.363 }, 00:15:23.363 "method": "bdev_xnvme_create" 00:15:23.363 }, 00:15:23.363 { 00:15:23.363 "method": "bdev_wait_for_examine" 00:15:23.363 } 00:15:23.363 ] 00:15:23.363 } 00:15:23.363 ] 00:15:23.363 } 00:15:23.363 [2024-11-20 15:08:24.072541] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:15:23.363 [2024-11-20 15:08:24.072982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71602 ] 00:15:23.622 [2024-11-20 15:08:24.260521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.622 [2024-11-20 15:08:24.404394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.243 Running I/O for 5 seconds... 00:15:26.116 30272.00 IOPS, 118.25 MiB/s [2024-11-20T15:08:27.890Z] 30144.00 IOPS, 117.75 MiB/s [2024-11-20T15:08:29.268Z] 28053.33 IOPS, 109.58 MiB/s [2024-11-20T15:08:29.836Z] 26848.00 IOPS, 104.88 MiB/s 00:15:29.000 Latency(us) 00:15:29.000 [2024-11-20T15:08:29.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.000 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:29.000 xnvme_bdev : 5.00 28796.18 112.49 0.00 0.00 2216.01 914.61 7685.35 00:15:29.000 [2024-11-20T15:08:29.836Z] =================================================================================================================== 00:15:29.000 [2024-11-20T15:08:29.836Z] Total : 28796.18 112.49 0.00 0.00 2216.01 914.61 7685.35 00:15:30.380 15:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:30.380 15:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:30.380 15:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:30.380 15:08:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:30.380 15:08:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:30.380 { 00:15:30.380 "subsystems": [ 00:15:30.380 { 00:15:30.380 "subsystem": "bdev", 00:15:30.380 "config": [ 00:15:30.380 { 00:15:30.380 "params": { 00:15:30.380 "io_mechanism": "io_uring", 00:15:30.380 "conserve_cpu": false, 00:15:30.380 "filename": "/dev/nvme0n1", 00:15:30.380 "name": "xnvme_bdev" 00:15:30.380 }, 00:15:30.380 "method": "bdev_xnvme_create" 00:15:30.380 }, 00:15:30.380 { 00:15:30.380 "method": "bdev_wait_for_examine" 00:15:30.380 } 00:15:30.380 ] 00:15:30.380 } 00:15:30.380 ] 00:15:30.380 } 00:15:30.380 [2024-11-20 15:08:31.174697] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
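A quick sanity check on these result tables: the IOPS and MiB/s columns are redundant by construction, since every I/O here is 4096 bytes. The conversion is

  MiB/s = IOPS * 4096 / 2^20

so the randread row above works out to 28796.18 * 4096 / 1048576 = 112.49 MiB/s, matching the figure bdevperf prints.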
00:15:30.380 [2024-11-20 15:08:31.175092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71683 ] 00:15:30.639 [2024-11-20 15:08:31.365179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.899 [2024-11-20 15:08:31.513916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.159 Running I/O for 5 seconds... 00:15:33.473 32896.00 IOPS, 128.50 MiB/s [2024-11-20T15:08:35.244Z] 32480.00 IOPS, 126.88 MiB/s [2024-11-20T15:08:36.182Z] 31978.67 IOPS, 124.92 MiB/s [2024-11-20T15:08:37.118Z] 31024.00 IOPS, 121.19 MiB/s [2024-11-20T15:08:37.118Z] 30771.20 IOPS, 120.20 MiB/s 00:15:36.282 Latency(us) 00:15:36.282 [2024-11-20T15:08:37.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.282 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:36.282 xnvme_bdev : 5.01 30728.36 120.03 0.00 0.00 2076.80 1289.66 7948.54 00:15:36.282 [2024-11-20T15:08:37.118Z] =================================================================================================================== 00:15:36.282 [2024-11-20T15:08:37.118Z] Total : 30728.36 120.03 0.00 0.00 2076.80 1289.66 7948.54 00:15:37.659 00:15:37.659 real 0m14.191s 00:15:37.659 user 0m6.793s 00:15:37.659 sys 0m7.175s 00:15:37.659 15:08:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.659 ************************************ 00:15:37.659 END TEST xnvme_bdevperf 00:15:37.659 15:08:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:37.659 ************************************ 00:15:37.659 15:08:38 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:37.659 15:08:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:37.660 15:08:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.660 15:08:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.660 ************************************ 00:15:37.660 START TEST xnvme_fio_plugin 00:15:37.660 ************************************ 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:37.660 15:08:38 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:37.660 { 00:15:37.660 "subsystems": [ 00:15:37.660 { 00:15:37.660 "subsystem": "bdev", 00:15:37.660 "config": [ 00:15:37.660 { 00:15:37.660 "params": { 00:15:37.660 "io_mechanism": "io_uring", 00:15:37.660 "conserve_cpu": false, 00:15:37.660 "filename": "/dev/nvme0n1", 00:15:37.660 "name": "xnvme_bdev" 00:15:37.660 }, 00:15:37.660 "method": "bdev_xnvme_create" 00:15:37.660 }, 00:15:37.660 { 00:15:37.660 "method": "bdev_wait_for_examine" 00:15:37.660 } 00:15:37.660 ] 00:15:37.660 } 00:15:37.660 ] 00:15:37.660 } 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:37.660 15:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:37.917 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:37.917 fio-3.35 00:15:37.917 Starting 1 thread 00:15:44.483 00:15:44.483 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71818: Wed Nov 20 15:08:44 2024 00:15:44.483 read: IOPS=29.6k, BW=116MiB/s (121MB/s)(578MiB/5001msec) 00:15:44.483 slat (nsec): min=2500, max=71127, avg=6475.22, stdev=2797.93 00:15:44.483 clat (usec): min=980, max=6045, avg=1907.28, stdev=323.91 00:15:44.483 lat (usec): min=983, max=6092, avg=1913.76, stdev=325.10 00:15:44.483 clat percentiles (usec): 00:15:44.483 | 1.00th=[ 1188], 5.00th=[ 1483], 10.00th=[ 1565], 20.00th=[ 1663], 00:15:44.483 | 30.00th=[ 1729], 40.00th=[ 1795], 50.00th=[ 1860], 60.00th=[ 1942], 00:15:44.483 | 70.00th=[ 2040], 80.00th=[ 2147], 90.00th=[ 2311], 95.00th=[ 2474], 00:15:44.483 | 99.00th=[ 2802], 99.50th=[ 2933], 99.90th=[ 3294], 99.95th=[ 4555], 00:15:44.483 | 99.99th=[ 5866] 
00:15:44.483 bw ( KiB/s): min=101888, max=136320, per=98.73%, avg=116848.00, stdev=12678.41, samples=9 00:15:44.483 iops : min=25472, max=34084, avg=29212.22, stdev=3170.17, samples=9 00:15:44.483 lat (usec) : 1000=0.01% 00:15:44.483 lat (msec) : 2=66.18%, 4=33.72%, 10=0.09% 00:15:44.483 cpu : usr=34.74%, sys=64.02%, ctx=11, majf=0, minf=762 00:15:44.483 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:44.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.483 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:44.483 issued rwts: total=147968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:44.483 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:44.483 00:15:44.483 Run status group 0 (all jobs): 00:15:44.483 READ: bw=116MiB/s (121MB/s), 116MiB/s-116MiB/s (121MB/s-121MB/s), io=578MiB (606MB), run=5001-5001msec 00:15:45.051 ----------------------------------------------------- 00:15:45.051 Suppressions used: 00:15:45.051 count bytes template 00:15:45.051 1 11 /usr/src/fio/parse.c 00:15:45.051 1 8 libtcmalloc_minimal.so 00:15:45.051 1 904 libcrypto.so 00:15:45.051 ----------------------------------------------------- 00:15:45.051 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:45.051 
15:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:45.051 15:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:45.051 { 00:15:45.051 "subsystems": [ 00:15:45.051 { 00:15:45.051 "subsystem": "bdev", 00:15:45.051 "config": [ 00:15:45.051 { 00:15:45.051 "params": { 00:15:45.051 "io_mechanism": "io_uring", 00:15:45.051 "conserve_cpu": false, 00:15:45.051 "filename": "/dev/nvme0n1", 00:15:45.051 "name": "xnvme_bdev" 00:15:45.051 }, 00:15:45.051 "method": "bdev_xnvme_create" 00:15:45.051 }, 00:15:45.051 { 00:15:45.051 "method": "bdev_wait_for_examine" 00:15:45.051 } 00:15:45.051 ] 00:15:45.051 } 00:15:45.051 ] 00:15:45.051 } 00:15:45.310 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:45.310 fio-3.35 00:15:45.310 Starting 1 thread 00:15:51.903 00:15:51.903 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71921: Wed Nov 20 15:08:51 2024 00:15:51.903 write: IOPS=34.3k, BW=134MiB/s (140MB/s)(670MiB/5002msec); 0 zone resets 00:15:51.903 slat (usec): min=2, max=171, avg= 4.96, stdev= 1.84 00:15:51.903 clat (usec): min=999, max=3487, avg=1668.77, stdev=257.59 00:15:51.903 lat (usec): min=1001, max=3494, avg=1673.73, stdev=258.35 00:15:51.903 clat percentiles (usec): 00:15:51.903 | 1.00th=[ 1254], 5.00th=[ 1352], 10.00th=[ 1401], 20.00th=[ 1467], 00:15:51.903 | 30.00th=[ 1516], 40.00th=[ 1565], 50.00th=[ 1614], 60.00th=[ 1680], 00:15:51.903 | 70.00th=[ 1745], 80.00th=[ 1860], 90.00th=[ 2008], 95.00th=[ 2147], 00:15:51.903 | 99.00th=[ 2474], 99.50th=[ 2606], 99.90th=[ 3032], 99.95th=[ 3163], 00:15:51.903 | 99.99th=[ 3392] 00:15:51.903 bw ( KiB/s): min=118784, max=147968, per=100.00%, avg=137187.78, stdev=12191.71, samples=9 00:15:51.903 iops : min=29696, max=36992, avg=34296.89, stdev=3047.98, samples=9 00:15:51.903 lat (usec) : 1000=0.01% 00:15:51.903 lat (msec) : 2=89.39%, 4=10.61% 00:15:51.903 cpu : usr=30.75%, sys=68.25%, ctx=8, majf=0, minf=763 00:15:51.903 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:51.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:51.903 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:51.903 issued rwts: total=0,171456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:51.903 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:51.903 00:15:51.903 Run status group 0 (all jobs): 00:15:51.903 WRITE: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=670MiB (702MB), run=5002-5002msec 00:15:52.841 ----------------------------------------------------- 00:15:52.841 Suppressions used: 00:15:52.841 count bytes template 00:15:52.841 1 11 /usr/src/fio/parse.c 00:15:52.841 1 8 libtcmalloc_minimal.so 00:15:52.841 1 904 libcrypto.so 00:15:52.841 ----------------------------------------------------- 00:15:52.841 00:15:52.841 ************************************ 00:15:52.841 END TEST xnvme_fio_plugin 00:15:52.841 ************************************ 
00:15:52.841 00:15:52.841 real 0m15.187s 00:15:52.841 user 0m7.299s 00:15:52.841 sys 0m7.496s 00:15:52.841 15:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.841 15:08:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:52.841 15:08:53 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:52.841 15:08:53 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:52.841 15:08:53 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:52.841 15:08:53 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:52.841 15:08:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:52.841 15:08:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.841 15:08:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.841 ************************************ 00:15:52.841 START TEST xnvme_rpc 00:15:52.841 ************************************ 00:15:52.841 15:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:52.841 15:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:52.841 15:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:52.841 15:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:52.841 15:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:52.841 15:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72023 00:15:52.841 15:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.841 15:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72023 00:15:52.841 15:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72023 ']' 00:15:52.841 15:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.841 15:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.841 15:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.841 15:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.841 15:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.841 [2024-11-20 15:08:53.624874] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
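This is the second lap of the conserve_cpu loop: xnvme.sh walks a small matrix, I/O mechanisms on the outside and conserve_cpu values on the inside, re-running the same three tests per combination. Condensed from the xtrace lines above into a sketch (the array definitions and the filename/name assignments sit outside this excerpt):

for io in "${xnvme_io[@]}"; do              # e.g. libaio, io_uring
  method_bdev_xnvme_create_0["io_mechanism"]=$io
  for cc in "${xnvme_conserve_cpu[@]}"; do  # false, then true
    method_bdev_xnvme_create_0["conserve_cpu"]=$cc
    conserve_cpu=$cc
    run_test xnvme_rpc xnvme_rpc
    run_test xnvme_bdevperf xnvme_bdevperf
    run_test xnvme_fio_plugin xnvme_fio_plugin
  done
done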
00:15:52.841 [2024-11-20 15:08:53.625287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72023 ] 00:15:53.100 [2024-11-20 15:08:53.815902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.359 [2024-11-20 15:08:53.965340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.304 xnvme_bdev 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.304 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72023 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72023 ']' 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72023 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72023 00:15:54.564 killing process with pid 72023 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72023' 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72023 00:15:54.564 15:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72023 00:15:57.099 ************************************ 00:15:57.099 END TEST xnvme_rpc 00:15:57.099 ************************************ 00:15:57.099 00:15:57.099 real 0m4.434s 00:15:57.099 user 0m4.311s 00:15:57.099 sys 0m0.761s 00:15:57.099 15:08:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.099 15:08:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.358 15:08:57 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:57.358 15:08:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:57.358 15:08:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.358 15:08:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.358 ************************************ 00:15:57.358 START TEST xnvme_bdevperf 00:15:57.358 ************************************ 00:15:57.358 15:08:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:57.358 15:08:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:57.358 15:08:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:15:57.358 15:08:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:57.358 15:08:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:57.358 15:08:58 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:57.358 15:08:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:57.358 15:08:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:57.358 { 00:15:57.358 "subsystems": [ 00:15:57.358 { 00:15:57.358 "subsystem": "bdev", 00:15:57.358 "config": [ 00:15:57.358 { 00:15:57.358 "params": { 00:15:57.358 "io_mechanism": "io_uring", 00:15:57.358 "conserve_cpu": true, 00:15:57.358 "filename": "/dev/nvme0n1", 00:15:57.358 "name": "xnvme_bdev" 00:15:57.358 }, 00:15:57.358 "method": "bdev_xnvme_create" 00:15:57.358 }, 00:15:57.358 { 00:15:57.358 "method": "bdev_wait_for_examine" 00:15:57.358 } 00:15:57.358 ] 00:15:57.358 } 00:15:57.358 ] 00:15:57.358 } 00:15:57.358 [2024-11-20 15:08:58.119361] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:15:57.358 [2024-11-20 15:08:58.119664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72120 ] 00:15:57.617 [2024-11-20 15:08:58.309707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.876 [2024-11-20 15:08:58.457697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.134 Running I/O for 5 seconds... 00:16:00.447 39296.00 IOPS, 153.50 MiB/s [2024-11-20T15:09:02.221Z] 38784.00 IOPS, 151.50 MiB/s [2024-11-20T15:09:03.158Z] 37674.67 IOPS, 147.17 MiB/s [2024-11-20T15:09:04.093Z] 36480.00 IOPS, 142.50 MiB/s [2024-11-20T15:09:04.093Z] 36070.40 IOPS, 140.90 MiB/s 00:16:03.257 Latency(us) 00:16:03.257 [2024-11-20T15:09:04.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.257 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:03.257 xnvme_bdev : 5.01 36037.89 140.77 0.00 0.00 1771.22 960.67 6132.49 00:16:03.257 [2024-11-20T15:09:04.093Z] =================================================================================================================== 00:16:03.257 [2024-11-20T15:09:04.093Z] Total : 36037.89 140.77 0.00 0.00 1771.22 960.67 6132.49 00:16:04.663 15:09:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:04.664 15:09:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:04.664 15:09:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:04.664 15:09:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:04.664 15:09:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:04.664 { 00:16:04.664 "subsystems": [ 00:16:04.664 { 00:16:04.664 "subsystem": "bdev", 00:16:04.664 "config": [ 00:16:04.664 { 00:16:04.664 "params": { 00:16:04.664 "io_mechanism": "io_uring", 00:16:04.664 "conserve_cpu": true, 00:16:04.664 "filename": "/dev/nvme0n1", 00:16:04.664 "name": "xnvme_bdev" 00:16:04.664 }, 00:16:04.664 "method": "bdev_xnvme_create" 00:16:04.664 }, 00:16:04.664 { 00:16:04.664 "method": "bdev_wait_for_examine" 00:16:04.664 } 00:16:04.664 ] 00:16:04.664 } 00:16:04.664 ] 00:16:04.664 } 00:16:04.664 [2024-11-20 15:09:05.281488] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
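The xnvme_fio_plugin stages bracketing these bdevperf runs repeat one setup dance before every fio launch: resolve the ASAN runtime the external ioengine was linked against, then preload it ahead of the plugin so the sanitizer initializes before fio does. Collapsed into its effective shell (a sketch; conf.json stands in for the /dev/fd/62 pipe that carries the gen_conf output):

plugin=build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # e.g. /usr/lib64/libasan.so.8
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=conf.json --filename=xnvme_bdev \
  --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
  --time_based --runtime=5 --thread=1 --name xnvme_bdev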
00:16:04.664 [2024-11-20 15:09:05.281672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72201 ] 00:16:04.664 [2024-11-20 15:09:05.479440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.922 [2024-11-20 15:09:05.628716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.490 Running I/O for 5 seconds... 00:16:07.369 31616.00 IOPS, 123.50 MiB/s [2024-11-20T15:09:09.151Z] 32064.00 IOPS, 125.25 MiB/s [2024-11-20T15:09:10.085Z] 32789.33 IOPS, 128.08 MiB/s [2024-11-20T15:09:11.461Z] 33504.00 IOPS, 130.88 MiB/s 00:16:10.625 Latency(us) 00:16:10.625 [2024-11-20T15:09:11.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.625 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:10.625 xnvme_bdev : 5.00 34054.81 133.03 0.00 0.00 1873.88 986.99 7316.87 00:16:10.625 [2024-11-20T15:09:11.461Z] =================================================================================================================== 00:16:10.625 [2024-11-20T15:09:11.462Z] Total : 34054.81 133.03 0.00 0.00 1873.88 986.99 7316.87 00:16:11.561 00:16:11.561 real 0m14.285s 00:16:11.561 user 0m8.181s 00:16:11.561 sys 0m5.640s 00:16:11.561 15:09:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.561 15:09:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:11.561 ************************************ 00:16:11.561 END TEST xnvme_bdevperf 00:16:11.561 ************************************ 00:16:11.561 15:09:12 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:11.561 15:09:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:11.561 15:09:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.561 15:09:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:11.562 ************************************ 00:16:11.562 START TEST xnvme_fio_plugin 00:16:11.562 ************************************ 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:11.562 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:11.820 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:11.820 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:11.820 { 00:16:11.820 "subsystems": [ 00:16:11.820 { 00:16:11.820 "subsystem": "bdev", 00:16:11.820 "config": [ 00:16:11.820 { 00:16:11.820 "params": { 00:16:11.820 "io_mechanism": "io_uring", 00:16:11.820 "conserve_cpu": true, 00:16:11.820 "filename": "/dev/nvme0n1", 00:16:11.820 "name": "xnvme_bdev" 00:16:11.820 }, 00:16:11.820 "method": "bdev_xnvme_create" 00:16:11.820 }, 00:16:11.820 { 00:16:11.820 "method": "bdev_wait_for_examine" 00:16:11.820 } 00:16:11.820 ] 00:16:11.820 } 00:16:11.820 ] 00:16:11.820 } 00:16:11.820 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:11.820 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:11.820 15:09:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:11.820 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:11.820 fio-3.35 00:16:11.820 Starting 1 thread 00:16:18.385 00:16:18.385 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72326: Wed Nov 20 15:09:18 2024 00:16:18.385 read: IOPS=34.9k, BW=136MiB/s (143MB/s)(682MiB/5001msec) 00:16:18.385 slat (usec): min=2, max=136, avg= 4.79, stdev= 2.14 00:16:18.385 clat (usec): min=720, max=4515, avg=1643.99, stdev=378.64 00:16:18.385 lat (usec): min=725, max=4521, avg=1648.78, stdev=379.94 00:16:18.385 clat percentiles (usec): 00:16:18.385 | 1.00th=[ 1106], 5.00th=[ 1205], 10.00th=[ 1254], 20.00th=[ 1336], 00:16:18.385 | 30.00th=[ 1401], 40.00th=[ 1467], 50.00th=[ 1549], 60.00th=[ 1647], 00:16:18.385 | 70.00th=[ 1778], 80.00th=[ 1942], 90.00th=[ 2180], 95.00th=[ 2376], 00:16:18.385 | 99.00th=[ 2737], 99.50th=[ 2933], 99.90th=[ 3326], 99.95th=[ 3556], 00:16:18.385 | 99.99th=[ 4424] 00:16:18.385 bw ( KiB/s): min=110592, max=167936, per=99.11%, avg=138324.11, stdev=19868.81, 
samples=9 00:16:18.385 iops : min=27648, max=41984, avg=34581.00, stdev=4967.21, samples=9 00:16:18.385 lat (usec) : 750=0.01%, 1000=0.04% 00:16:18.385 lat (msec) : 2=82.76%, 4=17.16%, 10=0.04% 00:16:18.385 cpu : usr=46.98%, sys=49.50%, ctx=13, majf=0, minf=762 00:16:18.385 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:18.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.385 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:18.385 issued rwts: total=174496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.385 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:18.385 00:16:18.385 Run status group 0 (all jobs): 00:16:18.385 READ: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=682MiB (715MB), run=5001-5001msec 00:16:19.318 ----------------------------------------------------- 00:16:19.318 Suppressions used: 00:16:19.318 count bytes template 00:16:19.318 1 11 /usr/src/fio/parse.c 00:16:19.318 1 8 libtcmalloc_minimal.so 00:16:19.318 1 904 libcrypto.so 00:16:19.318 ----------------------------------------------------- 00:16:19.318 00:16:19.318 15:09:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:19.318 15:09:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:19.318 15:09:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:19.318 15:09:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:19.318 15:09:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:19.318 15:09:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:19.318 15:09:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:19.318 15:09:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:19.318 15:09:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:19.318 15:09:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:19.318 15:09:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:19.318 15:09:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:19.318 15:09:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:19.318 15:09:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:19.319 15:09:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:19.319 15:09:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:19.319 15:09:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:19.319 15:09:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:16:19.319 15:09:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:19.319 15:09:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:19.319 15:09:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:19.319 { 00:16:19.319 "subsystems": [ 00:16:19.319 { 00:16:19.319 "subsystem": "bdev", 00:16:19.319 "config": [ 00:16:19.319 { 00:16:19.319 "params": { 00:16:19.319 "io_mechanism": "io_uring", 00:16:19.319 "conserve_cpu": true, 00:16:19.319 "filename": "/dev/nvme0n1", 00:16:19.319 "name": "xnvme_bdev" 00:16:19.319 }, 00:16:19.319 "method": "bdev_xnvme_create" 00:16:19.319 }, 00:16:19.319 { 00:16:19.319 "method": "bdev_wait_for_examine" 00:16:19.319 } 00:16:19.319 ] 00:16:19.319 } 00:16:19.319 ] 00:16:19.319 } 00:16:19.578 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:19.578 fio-3.35 00:16:19.578 Starting 1 thread 00:16:26.152 00:16:26.152 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72433: Wed Nov 20 15:09:26 2024 00:16:26.152 write: IOPS=26.9k, BW=105MiB/s (110MB/s)(525MiB/5002msec); 0 zone resets 00:16:26.152 slat (usec): min=2, max=265, avg= 6.89, stdev= 3.12 00:16:26.152 clat (usec): min=460, max=18216, avg=2113.51, stdev=663.35 00:16:26.152 lat (usec): min=466, max=18224, avg=2120.40, stdev=664.46 00:16:26.152 clat percentiles (usec): 00:16:26.152 | 1.00th=[ 1254], 5.00th=[ 1401], 10.00th=[ 1500], 20.00th=[ 1663], 00:16:26.152 | 30.00th=[ 1795], 40.00th=[ 1926], 50.00th=[ 2114], 60.00th=[ 2245], 00:16:26.152 | 70.00th=[ 2376], 80.00th=[ 2507], 90.00th=[ 2671], 95.00th=[ 2802], 00:16:26.152 | 99.00th=[ 3064], 99.50th=[ 3261], 99.90th=[13829], 99.95th=[16057], 00:16:26.152 | 99.99th=[18220] 00:16:26.152 bw ( KiB/s): min=93696, max=129024, per=99.24%, avg=106658.67, stdev=12589.08, samples=9 00:16:26.152 iops : min=23424, max=32256, avg=26664.67, stdev=3147.27, samples=9 00:16:26.152 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:16:26.152 lat (msec) : 2=44.34%, 4=55.51%, 20=0.14% 00:16:26.152 cpu : usr=46.35%, sys=49.95%, ctx=16, majf=0, minf=763 00:16:26.152 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:16:26.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.152 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:26.152 issued rwts: total=0,134391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.152 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:26.152 00:16:26.152 Run status group 0 (all jobs): 00:16:26.152 WRITE: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=525MiB (550MB), run=5002-5002msec 00:16:26.720 ----------------------------------------------------- 00:16:26.720 Suppressions used: 00:16:26.720 count bytes template 00:16:26.720 1 11 /usr/src/fio/parse.c 00:16:26.720 1 8 libtcmalloc_minimal.so 00:16:26.720 1 904 libcrypto.so 00:16:26.720 ----------------------------------------------------- 00:16:26.720 00:16:26.978 ************************************ 00:16:26.978 END TEST xnvme_fio_plugin 00:16:26.978 ************************************ 00:16:26.978 00:16:26.978 real 0m15.213s 00:16:26.978 user 
0m8.754s 00:16:26.978 sys 0m5.841s 00:16:26.978 15:09:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.978 15:09:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:26.978 15:09:27 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:26.978 15:09:27 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:16:26.978 15:09:27 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:16:26.978 15:09:27 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:16:26.978 15:09:27 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:26.978 15:09:27 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:26.978 15:09:27 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:26.978 15:09:27 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:26.978 15:09:27 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:26.979 15:09:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:26.979 15:09:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.979 15:09:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:26.979 ************************************ 00:16:26.979 START TEST xnvme_rpc 00:16:26.979 ************************************ 00:16:26.979 15:09:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:26.979 15:09:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:26.979 15:09:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:26.979 15:09:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:26.979 15:09:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:26.979 15:09:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72515 00:16:26.979 15:09:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:26.979 15:09:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72515 00:16:26.979 15:09:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72515 ']' 00:16:26.979 15:09:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.979 15:09:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.979 15:09:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.979 15:09:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.979 15:09:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.979 [2024-11-20 15:09:27.768397] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
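The xnvme_rpc pass starting here exercises the bdev over SPDK's JSON-RPC socket rather than through I/O: create the bdev on the ng char device, read the config back, delete it, kill the target. rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py, so the sequence can be replayed by hand along these lines (paths and argument order taken from the trace below; the standalone rpc.py calls are an approximation, not the harness itself):

    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt &                 # the test waits for /var/tmp/spdk.sock first
    ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'   # expect: xnvme_bdev
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    kill $!                                # harness equivalent: killprocess <pid>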
00:16:26.979 [2024-11-20 15:09:27.768694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72515 ] 00:16:27.237 [2024-11-20 15:09:27.956818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.495 [2024-11-20 15:09:28.100394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.432 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.433 xnvme_bdev 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.433 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:28.692 
15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72515 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72515 ']' 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72515 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72515 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.692 killing process with pid 72515 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72515' 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72515 00:16:28.692 15:09:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72515 00:16:31.226 00:16:31.226 real 0m4.405s 00:16:31.226 user 0m4.292s 00:16:31.226 sys 0m0.754s 00:16:31.226 15:09:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.226 15:09:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.226 ************************************ 00:16:31.226 END TEST xnvme_rpc 00:16:31.226 ************************************ 00:16:31.485 15:09:32 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:31.485 15:09:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:31.485 15:09:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.485 15:09:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:31.485 ************************************ 00:16:31.485 START TEST xnvme_bdevperf 00:16:31.485 ************************************ 00:16:31.485 15:09:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:31.485 15:09:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:31.485 15:09:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:16:31.485 15:09:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:31.485 15:09:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:31.485 15:09:32 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:31.485 15:09:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:31.485 15:09:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:31.485 { 00:16:31.485 "subsystems": [ 00:16:31.485 { 00:16:31.485 "subsystem": "bdev", 00:16:31.485 "config": [ 00:16:31.485 { 00:16:31.485 "params": { 00:16:31.485 "io_mechanism": "io_uring_cmd", 00:16:31.485 "conserve_cpu": false, 00:16:31.485 "filename": "/dev/ng0n1", 00:16:31.485 "name": "xnvme_bdev" 00:16:31.485 }, 00:16:31.485 "method": "bdev_xnvme_create" 00:16:31.485 }, 00:16:31.485 { 00:16:31.485 "method": "bdev_wait_for_examine" 00:16:31.485 } 00:16:31.485 ] 00:16:31.485 } 00:16:31.485 ] 00:16:31.485 } 00:16:31.485 [2024-11-20 15:09:32.229792] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:16:31.485 [2024-11-20 15:09:32.230088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72600 ] 00:16:31.744 [2024-11-20 15:09:32.416642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.744 [2024-11-20 15:09:32.559191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.312 Running I/O for 5 seconds... 00:16:34.187 31232.00 IOPS, 122.00 MiB/s [2024-11-20T15:09:36.403Z] 29984.00 IOPS, 117.12 MiB/s [2024-11-20T15:09:37.341Z] 30058.67 IOPS, 117.42 MiB/s [2024-11-20T15:09:38.286Z] 29824.00 IOPS, 116.50 MiB/s 00:16:37.450 Latency(us) 00:16:37.450 [2024-11-20T15:09:38.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.450 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:37.450 xnvme_bdev : 5.00 29481.23 115.16 0.00 0.00 2164.29 1230.44 7843.26 00:16:37.450 [2024-11-20T15:09:38.286Z] =================================================================================================================== 00:16:37.450 [2024-11-20T15:09:38.286Z] Total : 29481.23 115.16 0.00 0.00 2164.29 1230.44 7843.26 00:16:38.401 15:09:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:38.401 15:09:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:38.401 15:09:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:38.401 15:09:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:38.401 15:09:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:38.709 { 00:16:38.709 "subsystems": [ 00:16:38.709 { 00:16:38.709 "subsystem": "bdev", 00:16:38.709 "config": [ 00:16:38.709 { 00:16:38.709 "params": { 00:16:38.709 "io_mechanism": "io_uring_cmd", 00:16:38.709 "conserve_cpu": false, 00:16:38.709 "filename": "/dev/ng0n1", 00:16:38.709 "name": "xnvme_bdev" 00:16:38.709 }, 00:16:38.709 "method": "bdev_xnvme_create" 00:16:38.709 }, 00:16:38.709 { 00:16:38.709 "method": "bdev_wait_for_examine" 00:16:38.709 } 00:16:38.709 ] 00:16:38.709 } 00:16:38.709 ] 00:16:38.709 } 00:16:38.709 [2024-11-20 15:09:39.330834] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
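Each bdevperf pass in this block is configured entirely by the JSON blob the harness streams to /dev/fd/62, dumped verbatim in the trace above. A self-contained approximation of the randwrite pass puts that blob in a real file instead of an fd (device path and repo layout are this job's; adjust for your host):

    cat > /tmp/xnvme_bdev.json <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [
      {"method": "bdev_xnvme_create",
       "params": {"io_mechanism": "io_uring_cmd", "conserve_cpu": false,
                  "filename": "/dev/ng0n1", "name": "xnvme_bdev"}},
      {"method": "bdev_wait_for_examine"}]}]}
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme_bdev.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096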
00:16:38.709 [2024-11-20 15:09:39.330980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72686 ] 00:16:38.709 [2024-11-20 15:09:39.520236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.968 [2024-11-20 15:09:39.664582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.536 Running I/O for 5 seconds... 00:16:41.409 27520.00 IOPS, 107.50 MiB/s [2024-11-20T15:09:43.179Z] 27104.00 IOPS, 105.88 MiB/s [2024-11-20T15:09:44.116Z] 28522.67 IOPS, 111.42 MiB/s [2024-11-20T15:09:45.080Z] 29104.00 IOPS, 113.69 MiB/s 00:16:44.244 Latency(us) 00:16:44.244 [2024-11-20T15:09:45.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.244 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:44.244 xnvme_bdev : 5.00 29901.72 116.80 0.00 0.00 2133.66 1118.59 6632.56 00:16:44.244 [2024-11-20T15:09:45.080Z] =================================================================================================================== 00:16:44.244 [2024-11-20T15:09:45.080Z] Total : 29901.72 116.80 0.00 0.00 2133.66 1118.59 6632.56 00:16:45.624 15:09:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:45.624 15:09:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:16:45.624 15:09:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:45.624 15:09:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:45.624 15:09:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:45.624 { 00:16:45.624 "subsystems": [ 00:16:45.624 { 00:16:45.624 "subsystem": "bdev", 00:16:45.624 "config": [ 00:16:45.624 { 00:16:45.624 "params": { 00:16:45.624 "io_mechanism": "io_uring_cmd", 00:16:45.624 "conserve_cpu": false, 00:16:45.624 "filename": "/dev/ng0n1", 00:16:45.624 "name": "xnvme_bdev" 00:16:45.624 }, 00:16:45.624 "method": "bdev_xnvme_create" 00:16:45.624 }, 00:16:45.624 { 00:16:45.624 "method": "bdev_wait_for_examine" 00:16:45.624 } 00:16:45.624 ] 00:16:45.624 } 00:16:45.624 ] 00:16:45.624 } 00:16:45.624 [2024-11-20 15:09:46.419610] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:16:45.624 [2024-11-20 15:09:46.419779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72767 ] 00:16:45.883 [2024-11-20 15:09:46.609246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.142 [2024-11-20 15:09:46.755680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.401 Running I/O for 5 seconds... 
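A quick way to sanity-check these tables: with 4096-byte I/Os, the MiB/s column is just IOPS scaled by the block size, e.g. for the randwrite total above:

    awk 'BEGIN { printf "%.2f MiB/s\n", 29901.72 * 4096 / 2^20 }'   # -> 116.80, matching the table

The unmap samples that follow run far ahead of the read/write passes, which is expected: deallocate is advisory and largely metadata-only on most controllers, so no user data moves per command.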
00:16:48.350 70592.00 IOPS, 275.75 MiB/s [2024-11-20T15:09:50.563Z] 70400.00 IOPS, 275.00 MiB/s [2024-11-20T15:09:51.502Z] 70464.00 IOPS, 275.25 MiB/s [2024-11-20T15:09:52.460Z] 69904.00 IOPS, 273.06 MiB/s [2024-11-20T15:09:52.460Z] 70169.60 IOPS, 274.10 MiB/s 00:16:51.624 Latency(us) 00:16:51.624 [2024-11-20T15:09:52.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.624 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:16:51.624 xnvme_bdev : 5.00 70158.08 274.06 0.00 0.00 909.43 569.16 2421.41 00:16:51.624 [2024-11-20T15:09:52.460Z] =================================================================================================================== 00:16:51.624 [2024-11-20T15:09:52.460Z] Total : 70158.08 274.06 0.00 0.00 909.43 569.16 2421.41 00:16:53.005 15:09:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:53.005 15:09:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:16:53.005 15:09:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:53.005 15:09:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:53.005 15:09:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:53.005 { 00:16:53.005 "subsystems": [ 00:16:53.005 { 00:16:53.005 "subsystem": "bdev", 00:16:53.005 "config": [ 00:16:53.005 { 00:16:53.005 "params": { 00:16:53.005 "io_mechanism": "io_uring_cmd", 00:16:53.005 "conserve_cpu": false, 00:16:53.005 "filename": "/dev/ng0n1", 00:16:53.005 "name": "xnvme_bdev" 00:16:53.005 }, 00:16:53.005 "method": "bdev_xnvme_create" 00:16:53.005 }, 00:16:53.005 { 00:16:53.005 "method": "bdev_wait_for_examine" 00:16:53.005 } 00:16:53.005 ] 00:16:53.005 } 00:16:53.005 ] 00:16:53.005 } 00:16:53.005 [2024-11-20 15:09:53.509814] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:16:53.005 [2024-11-20 15:09:53.509958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72847 ] 00:16:53.005 [2024-11-20 15:09:53.695476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.264 [2024-11-20 15:09:53.840577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.522 Running I/O for 5 seconds... 
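The write_zeroes pass that follows shows the same pattern at an even wider spread (min latency in the tens of microseconds against a max above ten milliseconds): Write Zeroes carries no data payload, so the fast path is almost pure device-side bookkeeping. Whether a controller implements the command at all is advertised in ONCS bit 3, readable through the same char device the test uses (nvme-cli shown purely for illustration; it is not part of this harness):

    nvme id-ctrl /dev/ng0n1 -H | grep -i 'write zeroes'   # e.g. "[3:3] : 0x1  Write Zeroes Supported"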
00:16:55.831 67963.00 IOPS, 265.48 MiB/s [2024-11-20T15:09:57.601Z] 59108.00 IOPS, 230.89 MiB/s [2024-11-20T15:09:58.555Z] 54821.67 IOPS, 214.15 MiB/s [2024-11-20T15:09:59.490Z] 53041.75 IOPS, 207.19 MiB/s [2024-11-20T15:09:59.490Z] 51713.40 IOPS, 202.01 MiB/s 00:16:58.654 Latency(us) 00:16:58.654 [2024-11-20T15:09:59.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.654 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:16:58.654 xnvme_bdev : 5.00 51686.08 201.90 0.00 0.00 1234.27 64.57 10896.35 00:16:58.654 [2024-11-20T15:09:59.490Z] =================================================================================================================== 00:16:58.654 [2024-11-20T15:09:59.490Z] Total : 51686.08 201.90 0.00 0.00 1234.27 64.57 10896.35 00:17:00.032 00:17:00.033 real 0m28.382s 00:17:00.033 user 0m14.452s 00:17:00.033 sys 0m13.493s 00:17:00.033 15:10:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.033 15:10:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:00.033 ************************************ 00:17:00.033 END TEST xnvme_bdevperf 00:17:00.033 ************************************ 00:17:00.033 15:10:00 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:00.033 15:10:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:00.033 15:10:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.033 15:10:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:00.033 ************************************ 00:17:00.033 START TEST xnvme_fio_plugin 00:17:00.033 ************************************ 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
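The xtrace around this point is the suite resolving the ASAN runtime the fio plugin links against so it can be preloaded ahead of the plugin itself; otherwise ASAN aborts because its runtime is not the first loaded object. Condensed from the trace (this run resolves libasan.so.8; the sanitizers array shows clang builds would look for libclang_rt.asan instead):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
        --time_based --runtime=5 --thread=1 --name xnvme_bdev

(In the harness, fd 62 carries the JSON config shown below; standalone, point --spdk_json_conf at a file instead.)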
00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:00.033 15:10:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:00.033 { 00:17:00.033 "subsystems": [ 00:17:00.033 { 00:17:00.033 "subsystem": "bdev", 00:17:00.033 "config": [ 00:17:00.033 { 00:17:00.033 "params": { 00:17:00.033 "io_mechanism": "io_uring_cmd", 00:17:00.033 "conserve_cpu": false, 00:17:00.033 "filename": "/dev/ng0n1", 00:17:00.033 "name": "xnvme_bdev" 00:17:00.033 }, 00:17:00.033 "method": "bdev_xnvme_create" 00:17:00.033 }, 00:17:00.033 { 00:17:00.033 "method": "bdev_wait_for_examine" 00:17:00.033 } 00:17:00.033 ] 00:17:00.033 } 00:17:00.033 ] 00:17:00.033 } 00:17:00.033 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:00.033 fio-3.35 00:17:00.033 Starting 1 thread 00:17:06.598 00:17:06.598 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72971: Wed Nov 20 15:10:06 2024 00:17:06.598 read: IOPS=31.4k, BW=123MiB/s (128MB/s)(613MiB/5001msec) 00:17:06.598 slat (usec): min=2, max=323, avg= 5.74, stdev= 3.02 00:17:06.598 clat (usec): min=939, max=3195, avg=1810.80, stdev=301.06 00:17:06.598 lat (usec): min=943, max=3214, avg=1816.54, stdev=302.10 00:17:06.598 clat percentiles (usec): 00:17:06.598 | 1.00th=[ 1123], 5.00th=[ 1270], 10.00th=[ 1401], 20.00th=[ 1582], 00:17:06.598 | 30.00th=[ 1680], 40.00th=[ 1745], 50.00th=[ 1811], 60.00th=[ 1893], 00:17:06.598 | 70.00th=[ 1958], 80.00th=[ 2040], 90.00th=[ 2180], 95.00th=[ 2311], 00:17:06.598 | 99.00th=[ 2573], 99.50th=[ 2671], 99.90th=[ 2933], 99.95th=[ 2999], 00:17:06.598 | 99.99th=[ 3097] 00:17:06.598 bw ( KiB/s): min=116502, max=137964, per=100.00%, avg=126350.44, stdev=8643.80, samples=9 00:17:06.598 iops : min=29125, max=34491, avg=31587.56, stdev=2161.02, samples=9 00:17:06.598 lat (usec) : 1000=0.02% 00:17:06.598 lat (msec) : 2=75.50%, 4=24.48% 00:17:06.598 cpu : usr=33.50%, sys=64.78%, ctx=8, majf=0, minf=762 00:17:06.598 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:17:06.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.598 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 
00:17:06.598 issued rwts: total=156864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.598 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:06.598 00:17:06.598 Run status group 0 (all jobs): 00:17:06.598 READ: bw=123MiB/s (128MB/s), 123MiB/s-123MiB/s (128MB/s-128MB/s), io=613MiB (643MB), run=5001-5001msec 00:17:07.533 ----------------------------------------------------- 00:17:07.533 Suppressions used: 00:17:07.533 count bytes template 00:17:07.533 1 11 /usr/src/fio/parse.c 00:17:07.533 1 8 libtcmalloc_minimal.so 00:17:07.533 1 904 libcrypto.so 00:17:07.533 ----------------------------------------------------- 00:17:07.533 00:17:07.533 15:10:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:07.533 15:10:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:07.533 15:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:07.534 15:10:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:07.534 { 00:17:07.534 "subsystems": [ 00:17:07.534 { 00:17:07.534 "subsystem": "bdev", 00:17:07.534 "config": [ 00:17:07.534 { 00:17:07.534 "params": { 00:17:07.534 "io_mechanism": "io_uring_cmd", 00:17:07.534 "conserve_cpu": false, 00:17:07.534 "filename": "/dev/ng0n1", 00:17:07.534 "name": "xnvme_bdev" 00:17:07.534 }, 00:17:07.534 "method": "bdev_xnvme_create" 00:17:07.534 }, 00:17:07.534 { 00:17:07.534 "method": "bdev_wait_for_examine" 00:17:07.534 } 00:17:07.534 ] 00:17:07.534 } 00:17:07.534 ] 00:17:07.534 } 00:17:07.793 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:07.793 fio-3.35 00:17:07.793 Starting 1 thread 00:17:14.358 00:17:14.358 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73067: Wed Nov 20 15:10:14 2024 00:17:14.358 write: IOPS=28.4k, BW=111MiB/s (116MB/s)(555MiB/5001msec); 0 zone resets 00:17:14.358 slat (nsec): min=2480, max=75300, avg=6891.90, stdev=2980.63 00:17:14.358 clat (usec): min=261, max=4252, avg=1985.76, stdev=394.85 00:17:14.358 lat (usec): min=265, max=4258, avg=1992.65, stdev=396.24 00:17:14.358 clat percentiles (usec): 00:17:14.358 | 1.00th=[ 889], 5.00th=[ 1254], 10.00th=[ 1500], 20.00th=[ 1713], 00:17:14.358 | 30.00th=[ 1827], 40.00th=[ 1909], 50.00th=[ 2008], 60.00th=[ 2089], 00:17:14.358 | 70.00th=[ 2180], 80.00th=[ 2278], 90.00th=[ 2442], 95.00th=[ 2606], 00:17:14.358 | 99.00th=[ 2933], 99.50th=[ 3064], 99.90th=[ 3326], 99.95th=[ 3458], 00:17:14.358 | 99.99th=[ 3654] 00:17:14.358 bw ( KiB/s): min=98560, max=137736, per=100.00%, avg=113779.56, stdev=12070.36, samples=9 00:17:14.358 iops : min=24640, max=34434, avg=28444.89, stdev=3017.59, samples=9 00:17:14.358 lat (usec) : 500=0.03%, 750=0.30%, 1000=1.25% 00:17:14.358 lat (msec) : 2=48.37%, 4=50.05%, 10=0.01% 00:17:14.358 cpu : usr=36.74%, sys=62.04%, ctx=7, majf=0, minf=763 00:17:14.358 IO depths : 1=1.5%, 2=3.1%, 4=6.1%, 8=12.2%, 16=24.5%, 32=51.0%, >=64=1.6% 00:17:14.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.358 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:17:14.358 issued rwts: total=0,141954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.358 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:14.358 00:17:14.358 Run status group 0 (all jobs): 00:17:14.358 WRITE: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=555MiB (581MB), run=5001-5001msec 00:17:15.294 ----------------------------------------------------- 00:17:15.294 Suppressions used: 00:17:15.294 count bytes template 00:17:15.294 1 11 /usr/src/fio/parse.c 00:17:15.294 1 8 libtcmalloc_minimal.so 00:17:15.294 1 904 libcrypto.so 00:17:15.294 ----------------------------------------------------- 00:17:15.294 00:17:15.294 00:17:15.294 real 0m15.249s 00:17:15.294 user 0m7.560s 00:17:15.294 sys 0m7.287s 00:17:15.294 ************************************ 00:17:15.294 END TEST xnvme_fio_plugin 00:17:15.294 ************************************ 00:17:15.294 15:10:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.294 15:10:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:15.294 15:10:15 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:15.294 15:10:15 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:15.294 15:10:15 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:17:15.294 15:10:15 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:15.294 15:10:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:15.294 15:10:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.294 15:10:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:15.294 ************************************ 00:17:15.294 START TEST xnvme_rpc 00:17:15.294 ************************************ 00:17:15.294 15:10:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:15.294 15:10:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:15.294 15:10:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:15.294 15:10:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:15.294 15:10:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:15.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.295 15:10:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73153 00:17:15.295 15:10:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.295 15:10:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73153 00:17:15.295 15:10:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73153 ']' 00:17:15.295 15:10:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.295 15:10:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.295 15:10:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.295 15:10:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.295 15:10:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.295 [2024-11-20 15:10:16.032310] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
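This second xnvme_rpc pass repeats the create/inspect/delete sequence with -c appended (bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c, per the trace below) and asserts that the flag round-trips through the target's config dump:

    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    # the [[ true == \t\r\u\e ]] check below expects the literal string: true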
00:17:15.295 [2024-11-20 15:10:16.032612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73153 ] 00:17:15.554 [2024-11-20 15:10:16.220859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.554 [2024-11-20 15:10:16.366279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.932 xnvme_bdev 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73153 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73153 ']' 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73153 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73153 00:17:16.932 killing process with pid 73153 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73153' 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73153 00:17:16.932 15:10:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73153 00:17:19.467 00:17:19.467 real 0m4.390s 00:17:19.467 user 0m4.305s 00:17:19.467 sys 0m0.725s 00:17:19.467 ************************************ 00:17:19.467 END TEST xnvme_rpc 00:17:19.467 ************************************ 00:17:19.467 15:10:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.467 15:10:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.726 15:10:20 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:19.726 15:10:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:19.726 15:10:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.726 15:10:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:19.726 ************************************ 00:17:19.726 START TEST xnvme_bdevperf 00:17:19.726 ************************************ 00:17:19.726 15:10:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:19.726 15:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:19.726 15:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:17:19.726 15:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:19.726 15:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:19.726 15:10:20 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:19.726 15:10:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:19.726 15:10:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:19.726 { 00:17:19.726 "subsystems": [ 00:17:19.726 { 00:17:19.726 "subsystem": "bdev", 00:17:19.726 "config": [ 00:17:19.726 { 00:17:19.726 "params": { 00:17:19.726 "io_mechanism": "io_uring_cmd", 00:17:19.726 "conserve_cpu": true, 00:17:19.726 "filename": "/dev/ng0n1", 00:17:19.726 "name": "xnvme_bdev" 00:17:19.726 }, 00:17:19.726 "method": "bdev_xnvme_create" 00:17:19.726 }, 00:17:19.726 { 00:17:19.726 "method": "bdev_wait_for_examine" 00:17:19.726 } 00:17:19.726 ] 00:17:19.726 } 00:17:19.726 ] 00:17:19.726 } 00:17:19.726 [2024-11-20 15:10:20.478150] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:17:19.727 [2024-11-20 15:10:20.478349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73238 ] 00:17:19.986 [2024-11-20 15:10:20.692066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.245 [2024-11-20 15:10:20.838965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.504 Running I/O for 5 seconds... 00:17:22.449 42688.00 IOPS, 166.75 MiB/s [2024-11-20T15:10:24.661Z] 37248.00 IOPS, 145.50 MiB/s [2024-11-20T15:10:25.598Z] 35925.33 IOPS, 140.33 MiB/s [2024-11-20T15:10:26.535Z] 35952.00 IOPS, 140.44 MiB/s 00:17:25.700 Latency(us) 00:17:25.700 [2024-11-20T15:10:26.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.700 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:25.700 xnvme_bdev : 5.00 35672.94 139.35 0.00 0.00 1788.84 829.07 8211.74 00:17:25.700 [2024-11-20T15:10:26.536Z] =================================================================================================================== 00:17:25.700 [2024-11-20T15:10:26.536Z] Total : 35672.94 139.35 0.00 0.00 1788.84 829.07 8211.74 00:17:27.076 15:10:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:27.076 15:10:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:27.076 15:10:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:27.076 15:10:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:27.076 15:10:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:27.076 { 00:17:27.076 "subsystems": [ 00:17:27.076 { 00:17:27.076 "subsystem": "bdev", 00:17:27.076 "config": [ 00:17:27.076 { 00:17:27.076 "params": { 00:17:27.076 "io_mechanism": "io_uring_cmd", 00:17:27.076 "conserve_cpu": true, 00:17:27.076 "filename": "/dev/ng0n1", 00:17:27.076 "name": "xnvme_bdev" 00:17:27.076 }, 00:17:27.076 "method": "bdev_xnvme_create" 00:17:27.076 }, 00:17:27.076 { 00:17:27.076 "method": "bdev_wait_for_examine" 00:17:27.076 } 00:17:27.076 ] 00:17:27.076 } 00:17:27.076 ] 00:17:27.076 } 00:17:27.076 [2024-11-20 15:10:27.611616] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
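For reading these bdevperf tables: the columns are runtime, IOPS, MiB/s, failed and timed-out operations per second, then average/min/max latency in microseconds. At a fixed queue depth, IOPS and average latency are tied together (Little's law: IOPS is roughly depth / avg latency), which gives a cheap consistency check on the randread total above:

    awk 'BEGIN { printf "%.0f IOPS\n", 64 / 1788.84e-6 }'   # -> 35778; bdevperf measured 35672.94

The small gap reflects the moments when the queue is not completely full between completions and resubmissions.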
00:17:27.076 [2024-11-20 15:10:27.612029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73318 ] 00:17:27.076 [2024-11-20 15:10:27.829530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.335 [2024-11-20 15:10:27.972681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.594 Running I/O for 5 seconds... 00:17:29.909 33791.00 IOPS, 132.00 MiB/s [2024-11-20T15:10:31.680Z] 32543.50 IOPS, 127.12 MiB/s [2024-11-20T15:10:32.617Z] 32149.00 IOPS, 125.58 MiB/s [2024-11-20T15:10:33.554Z] 32991.75 IOPS, 128.87 MiB/s 00:17:32.718 Latency(us) 00:17:32.718 [2024-11-20T15:10:33.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.718 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:32.718 xnvme_bdev : 5.00 33640.37 131.41 0.00 0.00 1896.55 628.38 7790.62 00:17:32.718 [2024-11-20T15:10:33.554Z] =================================================================================================================== 00:17:32.718 [2024-11-20T15:10:33.554Z] Total : 33640.37 131.41 0.00 0.00 1896.55 628.38 7790.62 00:17:34.094 15:10:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:34.094 15:10:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:17:34.094 15:10:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:34.094 15:10:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:34.094 15:10:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:34.094 { 00:17:34.094 "subsystems": [ 00:17:34.094 { 00:17:34.094 "subsystem": "bdev", 00:17:34.094 "config": [ 00:17:34.094 { 00:17:34.094 "params": { 00:17:34.094 "io_mechanism": "io_uring_cmd", 00:17:34.094 "conserve_cpu": true, 00:17:34.094 "filename": "/dev/ng0n1", 00:17:34.094 "name": "xnvme_bdev" 00:17:34.094 }, 00:17:34.094 "method": "bdev_xnvme_create" 00:17:34.094 }, 00:17:34.094 { 00:17:34.094 "method": "bdev_wait_for_examine" 00:17:34.094 } 00:17:34.094 ] 00:17:34.094 } 00:17:34.094 ] 00:17:34.094 } 00:17:34.094 [2024-11-20 15:10:34.795249] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:17:34.094 [2024-11-20 15:10:34.795403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73403 ] 00:17:34.430 [2024-11-20 15:10:34.980840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.430 [2024-11-20 15:10:35.137083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.997 Running I/O for 5 seconds... 
00:17:36.866 73024.00 IOPS, 285.25 MiB/s [2024-11-20T15:10:38.637Z] 73568.00 IOPS, 287.38 MiB/s [2024-11-20T15:10:40.013Z] 71594.67 IOPS, 279.67 MiB/s [2024-11-20T15:10:40.580Z] 71376.00 IOPS, 278.81 MiB/s [2024-11-20T15:10:40.580Z] 71974.40 IOPS, 281.15 MiB/s 00:17:39.744 Latency(us) 00:17:39.744 [2024-11-20T15:10:40.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.744 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:17:39.744 xnvme_bdev : 5.00 71957.40 281.08 0.00 0.00 886.57 365.19 2487.21 00:17:39.744 [2024-11-20T15:10:40.580Z] =================================================================================================================== 00:17:39.744 [2024-11-20T15:10:40.580Z] Total : 71957.40 281.08 0.00 0.00 886.57 365.19 2487.21 00:17:41.121 15:10:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:41.121 15:10:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:41.121 15:10:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:17:41.121 15:10:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:41.121 15:10:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:41.121 { 00:17:41.121 "subsystems": [ 00:17:41.121 { 00:17:41.121 "subsystem": "bdev", 00:17:41.121 "config": [ 00:17:41.121 { 00:17:41.121 "params": { 00:17:41.121 "io_mechanism": "io_uring_cmd", 00:17:41.121 "conserve_cpu": true, 00:17:41.121 "filename": "/dev/ng0n1", 00:17:41.121 "name": "xnvme_bdev" 00:17:41.121 }, 00:17:41.121 "method": "bdev_xnvme_create" 00:17:41.121 }, 00:17:41.121 { 00:17:41.121 "method": "bdev_wait_for_examine" 00:17:41.121 } 00:17:41.121 ] 00:17:41.121 } 00:17:41.121 ] 00:17:41.121 } 00:17:41.380 [2024-11-20 15:10:41.981444] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:17:41.380 [2024-11-20 15:10:41.981629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73483 ] 00:17:41.380 [2024-11-20 15:10:42.169662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.640 [2024-11-20 15:10:42.319097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.208 Running I/O for 5 seconds... 
00:17:44.071 47534.00 IOPS, 185.68 MiB/s [2024-11-20T15:10:45.839Z] 47490.50 IOPS, 185.51 MiB/s [2024-11-20T15:10:46.770Z] 47929.67 IOPS, 187.23 MiB/s [2024-11-20T15:10:48.144Z] 48287.00 IOPS, 188.62 MiB/s [2024-11-20T15:10:48.144Z] 48445.80 IOPS, 189.24 MiB/s 00:17:47.308 Latency(us) 00:17:47.308 [2024-11-20T15:10:48.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.308 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:17:47.308 xnvme_bdev : 5.00 48433.93 189.20 0.00 0.00 1315.66 68.27 13896.79 00:17:47.308 [2024-11-20T15:10:48.144Z] =================================================================================================================== 00:17:47.308 [2024-11-20T15:10:48.144Z] Total : 48433.93 189.20 0.00 0.00 1315.66 68.27 13896.79 00:17:48.245 00:17:48.245 real 0m28.683s 00:17:48.245 user 0m17.201s 00:17:48.245 sys 0m8.940s 00:17:48.245 15:10:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:48.245 15:10:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:48.245 ************************************ 00:17:48.245 END TEST xnvme_bdevperf 00:17:48.245 ************************************ 00:17:48.506 15:10:49 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:48.506 15:10:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:48.506 15:10:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:48.506 15:10:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:48.506 ************************************ 00:17:48.506 START TEST xnvme_fio_plugin 00:17:48.506 ************************************ 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:48.506 15:10:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:48.506 { 00:17:48.506 "subsystems": [ 00:17:48.506 { 00:17:48.506 "subsystem": "bdev", 00:17:48.506 "config": [ 00:17:48.506 { 00:17:48.506 "params": { 00:17:48.506 "io_mechanism": "io_uring_cmd", 00:17:48.506 "conserve_cpu": true, 00:17:48.506 "filename": "/dev/ng0n1", 00:17:48.506 "name": "xnvme_bdev" 00:17:48.506 }, 00:17:48.506 "method": "bdev_xnvme_create" 00:17:48.506 }, 00:17:48.506 { 00:17:48.506 "method": "bdev_wait_for_examine" 00:17:48.506 } 00:17:48.506 ] 00:17:48.506 } 00:17:48.506 ] 00:17:48.506 } 00:17:48.766 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:48.766 fio-3.35 00:17:48.766 Starting 1 thread 00:17:55.342 00:17:55.342 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73607: Wed Nov 20 15:10:55 2024 00:17:55.342 read: IOPS=37.7k, BW=147MiB/s (154MB/s)(736MiB/5001msec) 00:17:55.342 slat (nsec): min=2422, max=86956, avg=4780.42, stdev=2179.00 00:17:55.342 clat (usec): min=741, max=3712, avg=1507.24, stdev=360.78 00:17:55.342 lat (usec): min=744, max=3717, avg=1512.02, stdev=362.04 00:17:55.342 clat percentiles (usec): 00:17:55.342 | 1.00th=[ 914], 5.00th=[ 1012], 10.00th=[ 1090], 20.00th=[ 1188], 00:17:55.342 | 30.00th=[ 1270], 40.00th=[ 1369], 50.00th=[ 1450], 60.00th=[ 1565], 00:17:55.342 | 70.00th=[ 1680], 80.00th=[ 1811], 90.00th=[ 2008], 95.00th=[ 2180], 00:17:55.342 | 99.00th=[ 2474], 99.50th=[ 2606], 99.90th=[ 2835], 99.95th=[ 2933], 00:17:55.342 | 99.99th=[ 3130] 00:17:55.342 bw ( KiB/s): min=128512, max=193024, per=100.00%, avg=151380.44, stdev=17837.66, samples=9 00:17:55.342 iops : min=32128, max=48256, avg=37845.11, stdev=4459.41, samples=9 00:17:55.342 lat (usec) : 750=0.01%, 1000=4.19% 00:17:55.342 lat (msec) : 2=85.50%, 4=10.31% 00:17:55.342 cpu : usr=51.64%, sys=45.72%, ctx=10, majf=0, minf=762 00:17:55.342 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:55.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.342 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=1.5%, >=64=0.0% 00:17:55.342 issued rwts: total=188479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.342 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:55.342 00:17:55.342 Run status group 0 (all jobs): 00:17:55.342 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=736MiB (772MB), run=5001-5001msec 00:17:56.282 ----------------------------------------------------- 00:17:56.282 Suppressions used: 00:17:56.282 count bytes template 00:17:56.282 1 11 /usr/src/fio/parse.c 00:17:56.282 1 8 libtcmalloc_minimal.so 00:17:56.282 1 904 libcrypto.so 00:17:56.282 ----------------------------------------------------- 00:17:56.282 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:56.282 { 00:17:56.282 "subsystems": [ 00:17:56.282 { 00:17:56.282 "subsystem": "bdev", 00:17:56.282 "config": [ 00:17:56.282 { 00:17:56.282 "params": { 00:17:56.282 "io_mechanism": "io_uring_cmd", 00:17:56.282 "conserve_cpu": true, 00:17:56.282 "filename": "/dev/ng0n1", 00:17:56.282 "name": "xnvme_bdev" 00:17:56.282 }, 00:17:56.282 "method": "bdev_xnvme_create" 00:17:56.282 }, 00:17:56.282 { 00:17:56.282 "method": "bdev_wait_for_examine" 00:17:56.282 } 00:17:56.282 ] 00:17:56.282 } 00:17:56.282 ] 00:17:56.282 } 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:56.282 15:10:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:56.282 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:56.282 fio-3.35 00:17:56.282 Starting 1 thread 00:18:02.852 00:18:02.852 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73708: Wed Nov 20 15:11:02 2024 00:18:02.852 write: IOPS=28.2k, BW=110MiB/s (115MB/s)(550MiB/5002msec); 0 zone resets 00:18:02.852 slat (usec): min=2, max=229, avg= 7.00, stdev= 3.29 00:18:02.852 clat (usec): min=303, max=7566, avg=1995.85, stdev=361.00 00:18:02.852 lat (usec): min=309, max=7574, avg=2002.85, stdev=362.28 00:18:02.852 clat percentiles (usec): 00:18:02.852 | 1.00th=[ 1123], 5.00th=[ 1434], 10.00th=[ 1582], 20.00th=[ 1729], 00:18:02.852 | 30.00th=[ 1827], 40.00th=[ 1909], 50.00th=[ 1991], 60.00th=[ 2073], 00:18:02.852 | 70.00th=[ 2147], 80.00th=[ 2245], 90.00th=[ 2409], 95.00th=[ 2540], 00:18:02.852 | 99.00th=[ 2802], 99.50th=[ 2966], 99.90th=[ 4228], 99.95th=[ 5866], 00:18:02.852 | 99.99th=[ 7242] 00:18:02.852 bw ( KiB/s): min=102195, max=122368, per=100.00%, avg=112873.22, stdev=7140.55, samples=9 00:18:02.852 iops : min=25548, max=30592, avg=28218.22, stdev=1785.28, samples=9 00:18:02.852 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.15% 00:18:02.852 lat (msec) : 2=50.86%, 4=48.84%, 10=0.11% 00:18:02.852 cpu : usr=54.95%, sys=41.87%, ctx=11, majf=0, minf=763 00:18:02.852 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:18:02.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.852 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:18:02.852 issued rwts: total=0,140922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:02.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:02.852 00:18:02.852 Run status group 0 (all jobs): 00:18:02.852 WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=550MiB (577MB), run=5002-5002msec 00:18:03.789 ----------------------------------------------------- 00:18:03.789 Suppressions used: 00:18:03.789 count bytes template 00:18:03.789 1 11 /usr/src/fio/parse.c 00:18:03.789 1 8 libtcmalloc_minimal.so 00:18:03.789 1 904 libcrypto.so 00:18:03.789 ----------------------------------------------------- 00:18:03.789 00:18:03.789 00:18:03.789 real 0m15.348s 00:18:03.789 user 0m9.431s 00:18:03.789 sys 0m5.365s 00:18:03.789 15:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.789 ************************************ 00:18:03.789 END TEST xnvme_fio_plugin 00:18:03.789 ************************************ 00:18:03.789 15:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:03.789 Process with pid 73153 is not found 00:18:03.789 15:11:04 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73153 00:18:03.789 15:11:04 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73153 ']' 00:18:03.789 15:11:04 nvme_xnvme 
-- common/autotest_common.sh@958 -- # kill -0 73153 00:18:03.789 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73153) - No such process 00:18:03.789 15:11:04 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73153 is not found' 00:18:03.789 00:18:03.789 real 3m58.758s 00:18:03.789 user 2m8.584s 00:18:03.789 sys 1m33.529s 00:18:03.789 15:11:04 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.789 15:11:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:03.789 ************************************ 00:18:03.789 END TEST nvme_xnvme 00:18:03.789 ************************************ 00:18:03.789 15:11:04 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:18:03.789 15:11:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:03.789 15:11:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:03.789 15:11:04 -- common/autotest_common.sh@10 -- # set +x 00:18:03.789 ************************************ 00:18:03.789 START TEST blockdev_xnvme 00:18:03.789 ************************************ 00:18:03.789 15:11:04 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:18:04.049 * Looking for test storage... 00:18:04.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:04.049 15:11:04 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:04.049 15:11:04 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:18:04.049 15:11:04 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:04.049 15:11:04 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:04.049 15:11:04 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:18:04.049 15:11:04 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:04.049 15:11:04 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:04.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.049 --rc genhtml_branch_coverage=1 00:18:04.049 --rc genhtml_function_coverage=1 00:18:04.049 --rc genhtml_legend=1 00:18:04.049 --rc geninfo_all_blocks=1 00:18:04.049 --rc geninfo_unexecuted_blocks=1 00:18:04.049 00:18:04.049 ' 00:18:04.049 15:11:04 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:04.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.049 --rc genhtml_branch_coverage=1 00:18:04.049 --rc genhtml_function_coverage=1 00:18:04.049 --rc genhtml_legend=1 00:18:04.049 --rc geninfo_all_blocks=1 00:18:04.049 --rc geninfo_unexecuted_blocks=1 00:18:04.049 00:18:04.049 ' 00:18:04.049 15:11:04 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:04.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.049 --rc genhtml_branch_coverage=1 00:18:04.049 --rc genhtml_function_coverage=1 00:18:04.049 --rc genhtml_legend=1 00:18:04.049 --rc geninfo_all_blocks=1 00:18:04.049 --rc geninfo_unexecuted_blocks=1 00:18:04.049 00:18:04.049 ' 00:18:04.049 15:11:04 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:04.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.049 --rc genhtml_branch_coverage=1 00:18:04.049 --rc genhtml_function_coverage=1 00:18:04.049 --rc genhtml_legend=1 00:18:04.049 --rc geninfo_all_blocks=1 00:18:04.049 --rc geninfo_unexecuted_blocks=1 00:18:04.049 00:18:04.049 ' 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73842 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73842 00:18:04.049 15:11:04 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73842 ']' 00:18:04.049 15:11:04 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.049 15:11:04 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.049 15:11:04 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.049 15:11:04 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.049 15:11:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:04.049 15:11:04 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:04.309 [2024-11-20 15:11:04.957591] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
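A reduced sketch (an assumption, not lifted from the trace) of the start_spdk_tgt/waitforlisten handshake performed above: launch spdk_tgt, then poll the default RPC socket until a trivial RPC answers. rpc_get_methods is a standard SPDK RPC; the paths mirror this workspace.

    ROOT=/home/vagrant/spdk_repo/spdk
    "$ROOT/build/bin/spdk_tgt" &              # same binary blockdev.sh@46 starts
    spdk_tgt_pid=$!
    # Poll the UNIX domain socket until the target services a trivial RPC.
    until "$ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "spdk_tgt (pid $spdk_tgt_pid) is listening on /var/tmp/spdk.sock"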
00:18:04.309 [2024-11-20 15:11:04.957760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73842 ] 00:18:04.569 [2024-11-20 15:11:05.146062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.569 [2024-11-20 15:11:05.300963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.526 15:11:06 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.526 15:11:06 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:18:05.526 15:11:06 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:05.526 15:11:06 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:18:05.526 15:11:06 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:18:05.527 15:11:06 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:18:05.527 15:11:06 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:06.468 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:07.036 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:18:07.036 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:18:07.036 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:18:07.036 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:18:07.036 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0c0n1 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0c0n1 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in 
/sys/block/nvme* 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:18:07.036 15:11:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:07.036 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:07.036 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:18:07.036 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:07.036 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:07.036 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:07.036 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:18:07.036 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:07.036 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n2 ]] 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n3 ]] 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- 
# for nvme in /dev/nvme*n* 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:18:07.037 15:11:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.037 15:11:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:07.037 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring -c' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:18:07.298 nvme0n1 00:18:07.298 nvme1n1 00:18:07.298 nvme1n2 00:18:07.298 nvme1n3 00:18:07.298 nvme2n1 00:18:07.298 nvme3n1 00:18:07.298 15:11:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.298 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:07.298 15:11:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.298 15:11:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:07.298 15:11:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.298 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:18:07.298 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:07.298 15:11:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.298 15:11:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:07.298 15:11:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.298 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:07.298 15:11:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.298 15:11:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:07.298 15:11:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.298 15:11:07 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:07.298 15:11:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.298 15:11:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:07.298 15:11:08 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.298 15:11:08 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:07.298 15:11:08 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:18:07.298 15:11:08 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:07.298 15:11:08 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.298 15:11:08 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:07.298 15:11:08 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.298 15:11:08 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:07.298 15:11:08 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "8d297c41-6edd-41a7-9922-70d56d3f1bb2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8d297c41-6edd-41a7-9922-70d56d3f1bb2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "85ea674a-50d4-4907-a1c0-0940a820b7ab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "85ea674a-50d4-4907-a1c0-0940a820b7ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "78c2e1d2-307a-447a-9556-c553d37edac9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "78c2e1d2-307a-447a-9556-c553d37edac9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "a3952c21-5851-4f19-a150-fa8c851e70c9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a3952c21-5851-4f19-a150-fa8c851e70c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' 
' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2545cb6a-41cd-49a1-bb63-b8551befaa75"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2545cb6a-41cd-49a1-bb63-b8551befaa75",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "c12b3067-b0f1-457c-bcea-cb6ad90f73de"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c12b3067-b0f1-457c-bcea-cb6ad90f73de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:18:07.298 15:11:08 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:07.558 15:11:08 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:07.558 15:11:08 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:18:07.558 15:11:08 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:07.558 15:11:08 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 73842 00:18:07.558 15:11:08 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73842 ']' 00:18:07.558 15:11:08 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73842 00:18:07.558 15:11:08 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:18:07.558 15:11:08 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.558 15:11:08 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73842 00:18:07.558 killing process with pid 73842 00:18:07.558 15:11:08 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:07.558 15:11:08 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:07.558 15:11:08 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73842' 00:18:07.558 15:11:08 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73842 
00:18:07.558 15:11:08 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73842 00:18:10.101 15:11:10 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:10.101 15:11:10 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:18:10.101 15:11:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:10.101 15:11:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.101 15:11:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:10.101 ************************************ 00:18:10.101 START TEST bdev_hello_world 00:18:10.101 ************************************ 00:18:10.102 15:11:10 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:18:10.360 [2024-11-20 15:11:10.992740] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:18:10.360 [2024-11-20 15:11:10.992898] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74144 ] 00:18:10.360 [2024-11-20 15:11:11.179701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.618 [2024-11-20 15:11:11.328411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.186 [2024-11-20 15:11:11.833790] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:11.186 [2024-11-20 15:11:11.833845] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:18:11.186 [2024-11-20 15:11:11.833865] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:11.186 [2024-11-20 15:11:11.836246] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:11.186 [2024-11-20 15:11:11.836729] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:11.186 [2024-11-20 15:11:11.836759] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:11.186 [2024-11-20 15:11:11.836995] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:18:11.186 00:18:11.186 [2024-11-20 15:11:11.837023] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:12.567 00:18:12.567 real 0m2.193s 00:18:12.567 user 0m1.733s 00:18:12.567 sys 0m0.342s 00:18:12.567 15:11:13 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.567 15:11:13 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:12.567 ************************************ 00:18:12.567 END TEST bdev_hello_world 00:18:12.567 ************************************ 00:18:12.567 15:11:13 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:18:12.567 15:11:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:12.567 15:11:13 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:12.567 15:11:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:12.567 ************************************ 00:18:12.567 START TEST bdev_bounds 00:18:12.567 ************************************ 00:18:12.567 15:11:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:12.567 15:11:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74186 00:18:12.567 15:11:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:12.567 15:11:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:12.567 Process bdevio pid: 74186 00:18:12.567 15:11:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74186' 00:18:12.567 15:11:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74186 00:18:12.567 15:11:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74186 ']' 00:18:12.567 15:11:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.567 15:11:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.567 15:11:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.567 15:11:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.567 15:11:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:12.567 [2024-11-20 15:11:13.256357] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:18:12.567 [2024-11-20 15:11:13.256524] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74186 ] 00:18:12.827 [2024-11-20 15:11:13.444972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:12.827 [2024-11-20 15:11:13.598922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.827 [2024-11-20 15:11:13.599081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.827 [2024-11-20 15:11:13.599113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.394 15:11:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.394 15:11:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:13.394 15:11:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:13.653 I/O targets: 00:18:13.653 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:18:13.653 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:13.653 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:13.653 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:13.653 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:18:13.653 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:18:13.653 00:18:13.653 00:18:13.653 CUnit - A unit testing framework for C - Version 2.1-3 00:18:13.653 http://cunit.sourceforge.net/ 00:18:13.653 00:18:13.653 00:18:13.653 Suite: bdevio tests on: nvme3n1 00:18:13.653 Test: blockdev write read block ...passed 00:18:13.653 Test: blockdev write zeroes read block ...passed 00:18:13.653 Test: blockdev write zeroes read no split ...passed 00:18:13.653 Test: blockdev write zeroes read split ...passed 00:18:13.653 Test: blockdev write zeroes read split partial ...passed 00:18:13.653 Test: blockdev reset ...passed 00:18:13.653 Test: blockdev write read 8 blocks ...passed 00:18:13.653 Test: blockdev write read size > 128k ...passed 00:18:13.653 Test: blockdev write read invalid size ...passed 00:18:13.653 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:13.653 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:13.653 Test: blockdev write read max offset ...passed 00:18:13.653 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:13.653 Test: blockdev writev readv 8 blocks ...passed 00:18:13.653 Test: blockdev writev readv 30 x 1block ...passed 00:18:13.653 Test: blockdev writev readv block ...passed 00:18:13.653 Test: blockdev writev readv size > 128k ...passed 00:18:13.653 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:13.653 Test: blockdev comparev and writev ...passed 00:18:13.653 Test: blockdev nvme passthru rw ...passed 00:18:13.653 Test: blockdev nvme passthru vendor specific ...passed 00:18:13.653 Test: blockdev nvme admin passthru ...passed 00:18:13.653 Test: blockdev copy ...passed 00:18:13.653 Suite: bdevio tests on: nvme2n1 00:18:13.654 Test: blockdev write read block ...passed 00:18:13.654 Test: blockdev write zeroes read block ...passed 00:18:13.654 Test: blockdev write zeroes read no split ...passed 00:18:13.654 Test: blockdev write zeroes read split ...passed 00:18:13.654 Test: blockdev write zeroes read split partial ...passed 00:18:13.654 Test: blockdev reset ...passed 
00:18:13.654 Test: blockdev write read 8 blocks ...passed 00:18:13.654 Test: blockdev write read size > 128k ...passed 00:18:13.654 Test: blockdev write read invalid size ...passed 00:18:13.654 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:13.654 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:13.654 Test: blockdev write read max offset ...passed 00:18:13.654 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:13.654 Test: blockdev writev readv 8 blocks ...passed 00:18:13.654 Test: blockdev writev readv 30 x 1block ...passed 00:18:13.654 Test: blockdev writev readv block ...passed 00:18:13.654 Test: blockdev writev readv size > 128k ...passed 00:18:13.654 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:13.654 Test: blockdev comparev and writev ...passed 00:18:13.654 Test: blockdev nvme passthru rw ...passed 00:18:13.654 Test: blockdev nvme passthru vendor specific ...passed 00:18:13.654 Test: blockdev nvme admin passthru ...passed 00:18:13.654 Test: blockdev copy ...passed 00:18:13.654 Suite: bdevio tests on: nvme1n3 00:18:13.654 Test: blockdev write read block ...passed 00:18:13.654 Test: blockdev write zeroes read block ...passed 00:18:13.654 Test: blockdev write zeroes read no split ...passed 00:18:13.913 Test: blockdev write zeroes read split ...passed 00:18:13.913 Test: blockdev write zeroes read split partial ...passed 00:18:13.913 Test: blockdev reset ...passed 00:18:13.913 Test: blockdev write read 8 blocks ...passed 00:18:13.913 Test: blockdev write read size > 128k ...passed 00:18:13.913 Test: blockdev write read invalid size ...passed 00:18:13.913 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:13.913 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:13.913 Test: blockdev write read max offset ...passed 00:18:13.913 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:13.913 Test: blockdev writev readv 8 blocks ...passed 00:18:13.913 Test: blockdev writev readv 30 x 1block ...passed 00:18:13.913 Test: blockdev writev readv block ...passed 00:18:13.913 Test: blockdev writev readv size > 128k ...passed 00:18:13.913 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:13.913 Test: blockdev comparev and writev ...passed 00:18:13.913 Test: blockdev nvme passthru rw ...passed 00:18:13.913 Test: blockdev nvme passthru vendor specific ...passed 00:18:13.913 Test: blockdev nvme admin passthru ...passed 00:18:13.913 Test: blockdev copy ...passed 00:18:13.913 Suite: bdevio tests on: nvme1n2 00:18:13.913 Test: blockdev write read block ...passed 00:18:13.913 Test: blockdev write zeroes read block ...passed 00:18:13.913 Test: blockdev write zeroes read no split ...passed 00:18:13.913 Test: blockdev write zeroes read split ...passed 00:18:13.913 Test: blockdev write zeroes read split partial ...passed 00:18:13.913 Test: blockdev reset ...passed 00:18:13.913 Test: blockdev write read 8 blocks ...passed 00:18:13.913 Test: blockdev write read size > 128k ...passed 00:18:13.913 Test: blockdev write read invalid size ...passed 00:18:13.913 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:13.913 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:13.913 Test: blockdev write read max offset ...passed 00:18:13.913 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:13.913 Test: blockdev writev readv 8 blocks 
...passed 00:18:13.913 Test: blockdev writev readv 30 x 1block ...passed 00:18:13.913 Test: blockdev writev readv block ...passed 00:18:13.913 Test: blockdev writev readv size > 128k ...passed 00:18:13.913 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:13.913 Test: blockdev comparev and writev ...passed 00:18:13.913 Test: blockdev nvme passthru rw ...passed 00:18:13.913 Test: blockdev nvme passthru vendor specific ...passed 00:18:13.914 Test: blockdev nvme admin passthru ...passed 00:18:13.914 Test: blockdev copy ...passed 00:18:13.914 Suite: bdevio tests on: nvme1n1 00:18:13.914 Test: blockdev write read block ...passed 00:18:13.914 Test: blockdev write zeroes read block ...passed 00:18:13.914 Test: blockdev write zeroes read no split ...passed 00:18:13.914 Test: blockdev write zeroes read split ...passed 00:18:13.914 Test: blockdev write zeroes read split partial ...passed 00:18:13.914 Test: blockdev reset ...passed 00:18:13.914 Test: blockdev write read 8 blocks ...passed 00:18:13.914 Test: blockdev write read size > 128k ...passed 00:18:13.914 Test: blockdev write read invalid size ...passed 00:18:13.914 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:13.914 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:13.914 Test: blockdev write read max offset ...passed 00:18:13.914 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:13.914 Test: blockdev writev readv 8 blocks ...passed 00:18:13.914 Test: blockdev writev readv 30 x 1block ...passed 00:18:13.914 Test: blockdev writev readv block ...passed 00:18:13.914 Test: blockdev writev readv size > 128k ...passed 00:18:13.914 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:13.914 Test: blockdev comparev and writev ...passed 00:18:13.914 Test: blockdev nvme passthru rw ...passed 00:18:13.914 Test: blockdev nvme passthru vendor specific ...passed 00:18:13.914 Test: blockdev nvme admin passthru ...passed 00:18:13.914 Test: blockdev copy ...passed 00:18:13.914 Suite: bdevio tests on: nvme0n1 00:18:13.914 Test: blockdev write read block ...passed 00:18:13.914 Test: blockdev write zeroes read block ...passed 00:18:13.914 Test: blockdev write zeroes read no split ...passed 00:18:13.914 Test: blockdev write zeroes read split ...passed 00:18:14.173 Test: blockdev write zeroes read split partial ...passed 00:18:14.173 Test: blockdev reset ...passed 00:18:14.173 Test: blockdev write read 8 blocks ...passed 00:18:14.173 Test: blockdev write read size > 128k ...passed 00:18:14.173 Test: blockdev write read invalid size ...passed 00:18:14.173 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:14.173 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:14.173 Test: blockdev write read max offset ...passed 00:18:14.173 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:14.173 Test: blockdev writev readv 8 blocks ...passed 00:18:14.173 Test: blockdev writev readv 30 x 1block ...passed 00:18:14.173 Test: blockdev writev readv block ...passed 00:18:14.173 Test: blockdev writev readv size > 128k ...passed 00:18:14.173 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:14.173 Test: blockdev comparev and writev ...passed 00:18:14.173 Test: blockdev nvme passthru rw ...passed 00:18:14.173 Test: blockdev nvme passthru vendor specific ...passed 00:18:14.173 Test: blockdev nvme admin passthru ...passed 00:18:14.173 Test: blockdev copy ...passed 
00:18:14.173 00:18:14.173 Run Summary: Type Total Ran Passed Failed Inactive 00:18:14.173 suites 6 6 n/a 0 0 00:18:14.173 tests 138 138 138 0 0 00:18:14.173 asserts 780 780 780 0 n/a 00:18:14.173 00:18:14.173 Elapsed time = 1.469 seconds 00:18:14.173 0 00:18:14.173 15:11:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74186 00:18:14.173 15:11:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74186 ']' 00:18:14.173 15:11:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74186 00:18:14.173 15:11:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:14.173 15:11:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.173 15:11:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74186 00:18:14.173 15:11:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.173 15:11:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.173 killing process with pid 74186 00:18:14.173 15:11:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74186' 00:18:14.173 15:11:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74186 00:18:14.173 15:11:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74186 00:18:15.553 15:11:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:15.553 00:18:15.553 real 0m2.945s 00:18:15.553 user 0m7.197s 00:18:15.553 sys 0m0.525s 00:18:15.553 ************************************ 00:18:15.553 END TEST bdev_bounds 00:18:15.553 ************************************ 00:18:15.553 15:11:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.553 15:11:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:15.553 15:11:16 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:18:15.553 15:11:16 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:15.553 15:11:16 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.553 15:11:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:15.553 ************************************ 00:18:15.553 START TEST bdev_nbd 00:18:15.553 ************************************ 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74246 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74246 /var/tmp/spdk-nbd.sock 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74246 ']' 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.553 15:11:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:15.553 [2024-11-20 15:11:16.284405] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:18:15.553 [2024-11-20 15:11:16.284563] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.811 [2024-11-20 15:11:16.466981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.811 [2024-11-20 15:11:16.613366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.377 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.377 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:16.377 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:18:16.377 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:16.377 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:18:16.377 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:16.377 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:18:16.377 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:16.377 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:18:16.377 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:16.377 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:16.377 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:16.377 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:16.377 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:16.377 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:16.636 
1+0 records in 00:18:16.636 1+0 records out 00:18:16.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761956 s, 5.4 MB/s 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:16.636 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:18:16.894 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:18:16.894 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:18:16.894 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:18:16.894 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:16.894 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:16.894 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:16.895 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:16.895 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:16.895 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:16.895 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:16.895 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:16.895 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:16.895 1+0 records in 00:18:16.895 1+0 records out 00:18:16.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000696757 s, 5.9 MB/s 00:18:16.895 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.895 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:16.895 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.895 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:16.895 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:16.895 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:16.895 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:16.895 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:18:17.153 15:11:17 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.153 1+0 records in 00:18:17.153 1+0 records out 00:18:17.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729174 s, 5.6 MB/s 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:17.153 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:17.412 15:11:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:18:17.412 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:18:17.412 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:18:17.412 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:18:17.412 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:18:17.412 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:17.412 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:17.412 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:17.412 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:18:17.412 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:17.412 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:17.412 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:17.412 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.412 1+0 records in 00:18:17.412 1+0 records out 00:18:17.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000868018 s, 4.7 MB/s 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:17.670 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.931 1+0 records in 00:18:17.931 1+0 records out 00:18:17.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000799498 s, 5.1 MB/s 00:18:17.931 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.931 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:17.931 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.931 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:17.931 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:17.931 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:17.931 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:17.931 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:18:17.931 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:18:17.931 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:18:18.189 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:18:18.189 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:18:18.189 15:11:18 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:18.189 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:18.189 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:18.189 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:18:18.189 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:18.189 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:18.189 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:18.189 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.189 1+0 records in 00:18:18.189 1+0 records out 00:18:18.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109198 s, 3.8 MB/s 00:18:18.189 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.189 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:18.189 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.189 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:18.189 15:11:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:18.189 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:18.189 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:18.189 15:11:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:18.189 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:18.189 { 00:18:18.189 "nbd_device": "/dev/nbd0", 00:18:18.189 "bdev_name": "nvme0n1" 00:18:18.189 }, 00:18:18.189 { 00:18:18.189 "nbd_device": "/dev/nbd1", 00:18:18.189 "bdev_name": "nvme1n1" 00:18:18.189 }, 00:18:18.189 { 00:18:18.189 "nbd_device": "/dev/nbd2", 00:18:18.189 "bdev_name": "nvme1n2" 00:18:18.189 }, 00:18:18.189 { 00:18:18.189 "nbd_device": "/dev/nbd3", 00:18:18.189 "bdev_name": "nvme1n3" 00:18:18.189 }, 00:18:18.190 { 00:18:18.190 "nbd_device": "/dev/nbd4", 00:18:18.190 "bdev_name": "nvme2n1" 00:18:18.190 }, 00:18:18.190 { 00:18:18.190 "nbd_device": "/dev/nbd5", 00:18:18.190 "bdev_name": "nvme3n1" 00:18:18.190 } 00:18:18.190 ]' 00:18:18.190 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:18.190 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:18.190 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:18.190 { 00:18:18.190 "nbd_device": "/dev/nbd0", 00:18:18.190 "bdev_name": "nvme0n1" 00:18:18.190 }, 00:18:18.190 { 00:18:18.190 "nbd_device": "/dev/nbd1", 00:18:18.190 "bdev_name": "nvme1n1" 00:18:18.190 }, 00:18:18.190 { 00:18:18.190 "nbd_device": "/dev/nbd2", 00:18:18.190 "bdev_name": "nvme1n2" 00:18:18.190 }, 00:18:18.190 { 00:18:18.190 "nbd_device": "/dev/nbd3", 00:18:18.190 "bdev_name": "nvme1n3" 00:18:18.190 }, 00:18:18.190 { 00:18:18.190 "nbd_device": "/dev/nbd4", 00:18:18.190 "bdev_name": "nvme2n1" 00:18:18.190 }, 00:18:18.190 { 00:18:18.190 "nbd_device": 
"/dev/nbd5", 00:18:18.190 "bdev_name": "nvme3n1" 00:18:18.190 } 00:18:18.190 ]' 00:18:18.449 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:18:18.449 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:18.449 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:18:18.449 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:18.449 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:18.449 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:18.449 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:18.449 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:18.707 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:18.707 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:18.707 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:18.707 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:18.707 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:18.707 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:18.707 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:18.707 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:18.707 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:18.966 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:18.966 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:18.966 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:18.966 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:18.966 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:18.966 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:18.966 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:18.966 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:18.966 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:18.966 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:18:19.223 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:18:19.223 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:18:19.223 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:18:19.223 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.223 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.223 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:18:19.223 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:19.223 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.223 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.223 15:11:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:18:19.481 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:18:19.481 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:18:19.481 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:18:19.481 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.481 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.481 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:18:19.481 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:19.481 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.481 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.481 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:18:19.739 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:18:19.739 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:18:19.739 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:18:19.739 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.739 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.739 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:18:19.739 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:19.739 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.739 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.739 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:18:19.997 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:18:19.997 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:18:19.997 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:18:19.997 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.997 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.997 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:18:19.997 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:19.997 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.997 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:19.997 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:19.997 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:20.256 15:11:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:18:20.514 /dev/nbd0 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:20.514 1+0 records in 00:18:20.514 1+0 records out 00:18:20.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577829 s, 7.1 MB/s 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:20.514 15:11:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:18:20.772 /dev/nbd1 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:20.772 1+0 records in 00:18:20.772 1+0 records out 00:18:20.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000687261 s, 6.0 MB/s 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:20.772 15:11:21 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:20.772 15:11:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:18:21.030 /dev/nbd10 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:21.030 1+0 records in 00:18:21.030 1+0 records out 00:18:21.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000802914 s, 5.1 MB/s 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:21.030 15:11:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:18:21.287 /dev/nbd11 00:18:21.287 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:18:21.287 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:18:21.288 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:18:21.288 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:21.288 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:21.288 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:21.288 15:11:22 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:18:21.288 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:21.288 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:21.288 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:21.288 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:21.288 1+0 records in 00:18:21.288 1+0 records out 00:18:21.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000751496 s, 5.5 MB/s 00:18:21.288 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.288 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:21.288 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.288 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:21.288 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:21.288 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:21.288 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:21.288 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:18:21.546 /dev/nbd12 00:18:21.546 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:18:21.546 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:18:21.546 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:18:21.546 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:21.546 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:21.546 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:21.546 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:18:21.546 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:21.546 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:21.546 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:21.546 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:21.546 1+0 records in 00:18:21.546 1+0 records out 00:18:21.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000796751 s, 5.1 MB/s 00:18:21.546 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.806 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:21.806 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.806 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:21.806 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:21.806 15:11:22 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:21.806 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:21.806 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:18:21.806 /dev/nbd13 00:18:21.806 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:18:21.806 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:18:21.806 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:18:21.806 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:21.806 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:21.806 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:21.806 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:18:22.064 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:22.064 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:22.064 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:22.064 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:22.064 1+0 records in 00:18:22.064 1+0 records out 00:18:22.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103418 s, 4.0 MB/s 00:18:22.064 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.064 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:22.064 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.064 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:22.064 15:11:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:22.064 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:22.064 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:22.064 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:22.064 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:22.064 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:22.322 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:22.322 { 00:18:22.322 "nbd_device": "/dev/nbd0", 00:18:22.322 "bdev_name": "nvme0n1" 00:18:22.322 }, 00:18:22.322 { 00:18:22.322 "nbd_device": "/dev/nbd1", 00:18:22.322 "bdev_name": "nvme1n1" 00:18:22.322 }, 00:18:22.322 { 00:18:22.322 "nbd_device": "/dev/nbd10", 00:18:22.322 "bdev_name": "nvme1n2" 00:18:22.322 }, 00:18:22.322 { 00:18:22.322 "nbd_device": "/dev/nbd11", 00:18:22.322 "bdev_name": "nvme1n3" 00:18:22.322 }, 00:18:22.322 { 00:18:22.322 "nbd_device": "/dev/nbd12", 00:18:22.322 "bdev_name": "nvme2n1" 00:18:22.322 }, 00:18:22.322 { 00:18:22.322 "nbd_device": "/dev/nbd13", 00:18:22.322 "bdev_name": "nvme3n1" 00:18:22.322 } 00:18:22.322 ]' 00:18:22.322 15:11:22 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:18:22.322 { 00:18:22.322 "nbd_device": "/dev/nbd0", 00:18:22.322 "bdev_name": "nvme0n1" 00:18:22.322 }, 00:18:22.322 { 00:18:22.322 "nbd_device": "/dev/nbd1", 00:18:22.322 "bdev_name": "nvme1n1" 00:18:22.322 }, 00:18:22.322 { 00:18:22.322 "nbd_device": "/dev/nbd10", 00:18:22.322 "bdev_name": "nvme1n2" 00:18:22.322 }, 00:18:22.322 { 00:18:22.322 "nbd_device": "/dev/nbd11", 00:18:22.322 "bdev_name": "nvme1n3" 00:18:22.322 }, 00:18:22.322 { 00:18:22.322 "nbd_device": "/dev/nbd12", 00:18:22.322 "bdev_name": "nvme2n1" 00:18:22.322 }, 00:18:22.322 { 00:18:22.322 "nbd_device": "/dev/nbd13", 00:18:22.322 "bdev_name": "nvme3n1" 00:18:22.322 } 00:18:22.322 ]' 00:18:22.322 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:22.322 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:22.322 /dev/nbd1 00:18:22.322 /dev/nbd10 00:18:22.322 /dev/nbd11 00:18:22.322 /dev/nbd12 00:18:22.322 /dev/nbd13' 00:18:22.322 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:22.322 /dev/nbd1 00:18:22.322 /dev/nbd10 00:18:22.322 /dev/nbd11 00:18:22.322 /dev/nbd12 00:18:22.322 /dev/nbd13' 00:18:22.322 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:22.322 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:18:22.322 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:18:22.322 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:18:22.322 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:18:22.322 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:18:22.322 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:22.322 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:22.322 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:22.322 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:22.322 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:22.322 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:22.323 256+0 records in 00:18:22.323 256+0 records out 00:18:22.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013157 s, 79.7 MB/s 00:18:22.323 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:22.323 15:11:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:22.323 256+0 records in 00:18:22.323 256+0 records out 00:18:22.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118664 s, 8.8 MB/s 00:18:22.323 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:22.323 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:22.580 256+0 records in 00:18:22.581 256+0 records out 00:18:22.581 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119459 s, 
8.8 MB/s 00:18:22.581 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:22.581 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:18:22.581 256+0 records in 00:18:22.581 256+0 records out 00:18:22.581 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123586 s, 8.5 MB/s 00:18:22.581 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:22.581 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:18:22.839 256+0 records in 00:18:22.839 256+0 records out 00:18:22.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121804 s, 8.6 MB/s 00:18:22.839 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:22.839 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:18:22.839 256+0 records in 00:18:22.839 256+0 records out 00:18:22.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119893 s, 8.7 MB/s 00:18:22.839 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:22.839 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:18:23.098 256+0 records in 00:18:23.098 256+0 records out 00:18:23.098 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14521 s, 7.2 MB/s 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:18:23.098 
15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:23.098 15:11:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:23.387 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:23.387 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:23.387 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:23.387 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:23.387 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:23.387 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:23.387 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:23.387 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:23.387 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:23.387 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:23.645 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:23.645 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:23.645 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:23.645 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:23.645 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:23.645 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:23.645 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:23.645 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:23.645 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:23.645 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:18:23.903 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:18:23.903 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:18:23.903 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:18:23.903 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:23.903 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:23.903 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:18:23.903 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:23.903 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:23.903 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:23.903 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:18:24.161 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:18:24.161 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:18:24.161 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:18:24.161 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:24.161 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:24.161 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:18:24.161 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:24.161 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:24.161 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:24.162 15:11:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:24.421 
15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:24.421 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:24.679 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:24.679 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:24.679 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:24.679 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:24.679 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:24.679 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:24.679 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:24.679 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:24.679 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:24.679 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:24.679 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:24.679 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:24.679 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:24.679 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:24.679 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:24.679 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:24.938 malloc_lvol_verify 00:18:24.938 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:25.196 cb480e89-62e1-4bd0-acbe-b15bb6c3dbc8 00:18:25.196 15:11:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:25.454 be02da49-57fb-46b2-8d8c-cca3ec78d8eb 00:18:25.454 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:25.712 /dev/nbd0 00:18:25.712 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:25.712 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:25.712 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:25.712 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:25.712 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:25.713 mke2fs 1.47.0 (5-Feb-2023) 00:18:25.713 Discarding device blocks: 0/4096 
done 00:18:25.713 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:25.713 00:18:25.713 Allocating group tables: 0/1 done 00:18:25.713 Writing inode tables: 0/1 done 00:18:25.713 Creating journal (1024 blocks): done 00:18:25.713 Writing superblocks and filesystem accounting information: 0/1 done 00:18:25.713 00:18:25.713 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:25.713 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:25.713 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:25.713 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:25.713 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:25.713 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.713 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74246 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74246 ']' 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74246 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74246 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74246' 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74246 00:18:25.989 15:11:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74246 00:18:25.989 killing process with pid 74246 00:18:27.367 15:11:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:27.367 00:18:27.367 real 0m11.803s 00:18:27.367 user 0m15.037s 00:18:27.367 sys 0m5.273s 00:18:27.367 15:11:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:27.367 ************************************ 00:18:27.367 END TEST bdev_nbd 00:18:27.367 15:11:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:27.367 ************************************ 
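The bdev_nbd pass that just completed reduces to two reusable shell patterns: a write-then-verify loop (copy a scratch file of test data onto each exported /dev/nbd* device with dd, then compare the device contents byte-for-byte against the file with cmp), and a poll loop that waits for a stopped device to disappear from /proc/partitions. Below is a minimal standalone sketch of both patterns; the device list, file path, and sizes are illustrative stand-ins, not the actual helpers from nbd_common.sh.

#!/usr/bin/env bash
# Sketch of the nbd write/verify pattern exercised above.
# Assumes the listed nbd devices are already exported by an SPDK nbd target.
set -euo pipefail

nbd_list=(/dev/nbd0 /dev/nbd1)   # illustrative; the test above drives six devices
tmp_file=$(mktemp)

# 1 MiB of random data: 256 blocks of 4096 bytes, matching the dd calls in the trace.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

# Write the pattern to every device, bypassing the page cache (oflag=direct),
# so the data actually travels through the nbd/bdev stack before dd returns.
for dev in "${nbd_list[@]}"; do
  dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# Verify: byte-wise compare the first 1 MiB of each device against the source file.
for dev in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"

# After asking the target to stop a disk, poll (up to ~2 s) until the kernel
# drops the device node, mirroring the waitfornbd_exit loop in the trace.
for ((i = 1; i <= 20; i++)); do
  grep -q -w nbd0 /proc/partitions || break
  sleep 0.1
done

The direct-I/O flag on the write side is the load-bearing detail: without it, dd could return once the data lands in the page cache, and the cmp pass might succeed without the block device ever having been exercised.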
00:18:27.367 15:11:28 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:18:27.367 15:11:28 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:18:27.367 15:11:28 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:18:27.367 15:11:28 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:18:27.367 15:11:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:27.367 15:11:28 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.367 15:11:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:27.367 ************************************ 00:18:27.367 START TEST bdev_fio 00:18:27.367 ************************************ 00:18:27.367 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:27.367 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:27.367 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:27.367 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:27.367 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:27.367 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:27.367 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:27.367 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:27.367 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:27.367 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:27.367 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:27.367 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:27.367 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n2]' 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n2 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n3]' 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n3 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:27.368 ************************************ 00:18:27.368 START TEST bdev_fio_rw_verify 00:18:27.368 ************************************ 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output
00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers
00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift
00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib=
00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan
00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break
00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:18:27.368 15:11:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:18:27.627 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:18:27.627 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:18:27.627 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:18:27.627 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:18:27.627 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:18:27.627 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:18:27.627 fio-3.35
00:18:27.627 Starting 6 threads
00:18:39.850
00:18:39.850 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74667: Wed Nov 20 15:11:39 2024
00:18:39.850 read: IOPS=33.7k, BW=132MiB/s (138MB/s)(1318MiB/10001msec)
00:18:39.850 slat (usec): min=2, max=1976, avg= 6.34, stdev= 5.63
00:18:39.850 clat (usec): min=109, max=6494, avg=568.40, stdev=182.43
00:18:39.850 lat (usec): min=115, max=6502, avg=574.74, stdev=183.15
00:18:39.850 clat percentiles (usec):
00:18:39.850 | 50.000th=[ 603], 99.000th=[ 1004], 99.900th=[ 1811], 99.990th=[ 4817],
00:18:39.850 | 99.999th=[ 6456]
00:18:39.850 write: IOPS=34.0k, BW=133MiB/s (139MB/s)(1329MiB/10001msec); 0 zone resets
00:18:39.850 slat (usec): min=6, max=2306, avg=20.57, stdev=22.26
00:18:39.850 clat (usec): min=81, max=6354, avg=637.22, stdev=192.34
00:18:39.850 lat (usec): min=97, max=6394, avg=657.79, stdev=195.07
00:18:39.850 clat percentiles (usec):
00:18:39.850 | 50.000th=[ 644], 99.000th=[ 1221], 99.900th=[ 2073], 99.990th=[ 2900],
00:18:39.850 | 99.999th=[ 6259]
00:18:39.850 bw ( KiB/s): min=111888, max=155800, per=100.00%, avg=136789.58, stdev=2052.38, samples=114
00:18:39.850 iops : min=27972, max=38950, avg=34197.47, stdev=513.22, samples=114
00:18:39.851 lat (usec) : 100=0.01%, 250=3.62%, 500=20.03%, 750=63.80%, 1000=10.53%
00:18:39.851 lat (msec) : 2=1.94%, 4=0.08%, 10=0.01%
00:18:39.851 cpu : usr=64.42%, sys=24.16%, ctx=9269, majf=0, minf=27907
00:18:39.851 IO depths : 1=12.1%, 2=24.5%, 4=50.5%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:39.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:39.851 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:39.851 issued rwts: total=337415,340168,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:39.851 latency : target=0, window=0, percentile=100.00%, depth=8
00:18:39.851
00:18:39.851 Run status group 0 (all jobs):
00:18:39.851 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=1318MiB (1382MB), run=10001-10001msec
00:18:39.851 WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=1329MiB (1393MB), run=10001-10001msec
00:18:40.419 -----------------------------------------------------
00:18:40.419 Suppressions used:
00:18:40.419 count bytes template
00:18:40.419 6 48 /usr/src/fio/parse.c
00:18:40.419 2520 241920 /usr/src/fio/iolog.c
00:18:40.419 1 8 libtcmalloc_minimal.so
00:18:40.419 1 904 libcrypto.so
00:18:40.419 -----------------------------------------------------
00:18:40.419
00:18:40.419
00:18:40.419 real 0m12.866s
00:18:40.419 user 0m40.898s
00:18:40.419 sys 0m15.028s
00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:40.419 ************************************
00:18:40.419 END TEST bdev_fio_rw_verify
00:18:40.419 ************************************
00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio --
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "8d297c41-6edd-41a7-9922-70d56d3f1bb2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8d297c41-6edd-41a7-9922-70d56d3f1bb2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "85ea674a-50d4-4907-a1c0-0940a820b7ab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "85ea674a-50d4-4907-a1c0-0940a820b7ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "78c2e1d2-307a-447a-9556-c553d37edac9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "78c2e1d2-307a-447a-9556-c553d37edac9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": 
false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "a3952c21-5851-4f19-a150-fa8c851e70c9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a3952c21-5851-4f19-a150-fa8c851e70c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2545cb6a-41cd-49a1-bb63-b8551befaa75"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2545cb6a-41cd-49a1-bb63-b8551befaa75",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "c12b3067-b0f1-457c-bcea-cb6ad90f73de"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c12b3067-b0f1-457c-bcea-cb6ad90f73de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:40.419 /home/vagrant/spdk_repo/spdk 00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:18:40.419 00:18:40.419 real 0m13.092s 00:18:40.419 user 0m41.013s 00:18:40.419 
sys 0m15.146s 00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.419 15:11:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:40.419 ************************************ 00:18:40.419 END TEST bdev_fio 00:18:40.419 ************************************ 00:18:40.419 15:11:41 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:40.419 15:11:41 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:40.419 15:11:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:40.419 15:11:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.419 15:11:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:40.419 ************************************ 00:18:40.419 START TEST bdev_verify 00:18:40.419 ************************************ 00:18:40.419 15:11:41 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:40.698 [2024-11-20 15:11:41.311479] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:18:40.698 [2024-11-20 15:11:41.311633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74839 ] 00:18:40.698 [2024-11-20 15:11:41.500432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:40.958 [2024-11-20 15:11:41.654482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.958 [2024-11-20 15:11:41.654515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.526 Running I/O for 5 seconds... 
00:18:43.864 26592.00 IOPS, 103.88 MiB/s
[2024-11-20T15:11:45.637Z] 24720.00 IOPS, 96.56 MiB/s
[2024-11-20T15:11:46.569Z] 24629.33 IOPS, 96.21 MiB/s
[2024-11-20T15:11:47.507Z] 24456.00 IOPS, 95.53 MiB/s
[2024-11-20T15:11:47.507Z] 23891.20 IOPS, 93.33 MiB/s
00:18:46.671 Latency(us)
00:18:46.671 [2024-11-20T15:11:47.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:46.671 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:46.671 Verification LBA range: start 0x0 length 0x20000
00:18:46.671 nvme0n1 : 5.03 1831.53 7.15 0.00 0.00 69769.40 10159.40 63167.23
00:18:46.671 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:46.671 Verification LBA range: start 0x20000 length 0x20000
00:18:46.671 nvme0n1 : 5.08 1788.48 6.99 0.00 0.00 71446.64 13580.95 74116.22
00:18:46.671 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:46.671 Verification LBA range: start 0x0 length 0x80000
00:18:46.671 nvme1n1 : 5.08 1814.96 7.09 0.00 0.00 70286.81 11843.86 66957.26
00:18:46.671 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:46.671 Verification LBA range: start 0x80000 length 0x80000
00:18:46.671 nvme1n1 : 5.04 1778.51 6.95 0.00 0.00 71730.92 18002.66 66536.15
00:18:46.671 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:46.671 Verification LBA range: start 0x0 length 0x80000
00:18:46.671 nvme1n2 : 5.04 1804.40 7.05 0.00 0.00 70585.79 18529.05 64851.69
00:18:46.671 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:46.671 Verification LBA range: start 0x80000 length 0x80000
00:18:46.671 nvme1n2 : 5.04 1777.93 6.95 0.00 0.00 71635.09 15370.69 71168.41
00:18:46.671 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:46.671 Verification LBA range: start 0x0 length 0x80000
00:18:46.671 nvme1n3 : 5.08 1813.52 7.08 0.00 0.00 70125.83 7790.62 67378.38
00:18:46.671 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:46.671 Verification LBA range: start 0x80000 length 0x80000
00:18:46.671 nvme1n3 : 5.09 1786.42 6.98 0.00 0.00 71178.80 17581.55 62746.11
00:18:46.671 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:46.671 Verification LBA range: start 0x0 length 0xa0000
00:18:46.671 nvme2n1 : 5.08 1812.46 7.08 0.00 0.00 70059.63 9685.64 69905.07
00:18:46.671 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:46.671 Verification LBA range: start 0xa0000 length 0xa0000
00:18:46.671 nvme2n1 : 5.09 1785.20 6.97 0.00 0.00 71107.67 11054.27 79590.71
00:18:46.671 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:46.671 Verification LBA range: start 0x0 length 0xbd0bd
00:18:46.671 nvme3n1 : 5.06 2874.86 11.23 0.00 0.00 44064.18 4790.18 57271.62
00:18:46.671 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:46.671 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:18:46.671 nvme3n1 : 5.08 2760.13 10.78 0.00 0.00 45893.11 5500.81 56008.28
00:18:46.671 [2024-11-20T15:11:47.507Z] ===================================================================================================================
00:18:46.671 [2024-11-20T15:11:47.507Z] Total : 23628.40 92.30 0.00 0.00 64621.50 4790.18 79590.71
00:18:48.048
00:18:48.048 real 0m7.415s
00:18:48.048 user 0m11.143s
00:18:48.048 sys 0m2.325s
00:18:48.048 15:11:48 blockdev_xnvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.048 15:11:48 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:48.048 ************************************ 00:18:48.048 END TEST bdev_verify 00:18:48.048 ************************************ 00:18:48.048 15:11:48 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:48.048 15:11:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:48.048 15:11:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.048 15:11:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:48.048 ************************************ 00:18:48.048 START TEST bdev_verify_big_io 00:18:48.048 ************************************ 00:18:48.048 15:11:48 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:48.048 [2024-11-20 15:11:48.806240] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:18:48.048 [2024-11-20 15:11:48.806407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74943 ] 00:18:48.307 [2024-11-20 15:11:48.999411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:48.565 [2024-11-20 15:11:49.150817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.565 [2024-11-20 15:11:49.150851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.132 Running I/O for 5 seconds... 
00:18:55.004 1888.00 IOPS, 118.00 MiB/s
[2024-11-20T15:11:55.840Z] 3312.00 IOPS, 207.00 MiB/s
[2024-11-20T15:11:55.840Z] 3600.00 IOPS, 225.00 MiB/s
00:18:55.004 Latency(us)
00:18:55.004 [2024-11-20T15:11:55.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:55.004 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:55.004 Verification LBA range: start 0x0 length 0x2000
00:18:55.004 nvme0n1 : 5.76 147.29 9.21 0.00 0.00 842148.40 17160.43 1037627.01
00:18:55.004 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:55.004 Verification LBA range: start 0x2000 length 0x2000
00:18:55.004 nvme0n1 : 5.65 155.82 9.74 0.00 0.00 794701.64 13896.79 869181.07
00:18:55.004 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:55.004 Verification LBA range: start 0x0 length 0x8000
00:18:55.004 nvme1n1 : 5.56 143.80 8.99 0.00 0.00 841158.51 181079.39 1354305.39
00:18:55.004 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:55.004 Verification LBA range: start 0x8000 length 0x8000
00:18:55.004 nvme1n1 : 5.65 164.25 10.27 0.00 0.00 733231.76 15686.53 936559.45
00:18:55.004 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:55.004 Verification LBA range: start 0x0 length 0x8000
00:18:55.004 nvme1n2 : 5.76 166.66 10.42 0.00 0.00 707795.38 111174.32 1084791.88
00:18:55.004 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:55.004 Verification LBA range: start 0x8000 length 0x8000
00:18:55.004 nvme1n2 : 5.65 147.18 9.20 0.00 0.00 800998.30 48007.09 1017413.50
00:18:55.004 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:55.004 Verification LBA range: start 0x0 length 0x8000
00:18:55.004 nvme1n3 : 5.65 152.86 9.55 0.00 0.00 756622.65 44217.06 828754.04
00:18:55.004 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:55.004 Verification LBA range: start 0x8000 length 0x8000
00:18:55.004 nvme1n3 : 5.66 178.03 11.13 0.00 0.00 648688.48 22845.48 1091529.72
00:18:55.004 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:55.004 Verification LBA range: start 0x0 length 0xa000
00:18:55.004 nvme2n1 : 5.78 130.20 8.14 0.00 0.00 873760.76 11370.10 1873118.89
00:18:55.004 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:55.004 Verification LBA range: start 0xa000 length 0xa000
00:18:55.004 nvme2n1 : 5.79 143.68 8.98 0.00 0.00 777402.33 93908.61 1172383.77
00:18:55.004 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:55.004 Verification LBA range: start 0x0 length 0xbd0b
00:18:55.004 nvme3n1 : 5.79 198.88 12.43 0.00 0.00 556241.37 12422.89 781589.18
00:18:55.004 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:55.004 Verification LBA range: start 0xbd0b length 0xbd0b
00:18:55.004 nvme3n1 : 5.80 173.70 10.86 0.00 0.00 631386.53 7790.62 1320616.20
00:18:55.004 [2024-11-20T15:11:55.840Z] ===================================================================================================================
00:18:55.004 [2024-11-20T15:11:55.840Z] Total : 1902.35 118.90 0.00 0.00 736510.84 7790.62 1873118.89
00:18:56.914
00:18:56.914 real 0m8.546s
00:18:56.914 user 0m15.322s
00:18:56.914 sys 0m0.722s
00:18:56.914 15:11:57 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:56.914 15:11:57
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:56.914 ************************************ 00:18:56.914 END TEST bdev_verify_big_io 00:18:56.914 ************************************ 00:18:56.914 15:11:57 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:56.914 15:11:57 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:56.914 15:11:57 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.914 15:11:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:56.914 ************************************ 00:18:56.914 START TEST bdev_write_zeroes 00:18:56.914 ************************************ 00:18:56.914 15:11:57 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:56.914 [2024-11-20 15:11:57.432074] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:18:56.914 [2024-11-20 15:11:57.432233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75053 ] 00:18:56.914 [2024-11-20 15:11:57.619809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.174 [2024-11-20 15:11:57.773952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.769 Running I/O for 1 seconds... 
00:18:58.708 57184.00 IOPS, 223.38 MiB/s
00:18:58.708 Latency(us)
00:18:58.708 [2024-11-20T15:11:59.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:58.708 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:58.708 nvme0n1 : 1.04 9122.10 35.63 0.00 0.00 14017.45 7685.35 27583.02
00:18:58.708 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:58.708 nvme1n1 : 1.03 9059.91 35.39 0.00 0.00 14103.32 7737.99 34110.30
00:18:58.708 Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:58.708 nvme1n2 : 1.03 9048.33 35.35 0.00 0.00 14111.95 7737.99 33899.75
00:18:58.708 Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:58.708 nvme1n3 : 1.03 9037.03 35.30 0.00 0.00 14119.81 7895.90 33899.75
00:18:58.708 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:58.708 nvme2n1 : 1.04 9027.25 35.26 0.00 0.00 14125.85 8001.18 33478.63
00:18:58.708 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:58.708 nvme3n1 : 1.04 10895.61 42.56 0.00 0.00 11693.92 4000.59 34741.98
00:18:58.708 [2024-11-20T15:11:59.544Z] ===================================================================================================================
00:18:58.708 [2024-11-20T15:11:59.544Z] Total : 56190.23 219.49 0.00 0.00 13626.77 4000.59 34741.98
00:19:00.088
00:19:00.088 real 0m3.331s
00:19:00.088 user 0m2.433s
00:19:00.088 sys 0m0.700s
00:19:00.088 ************************************
00:19:00.088 END TEST bdev_write_zeroes
00:19:00.088 ************************************
00:19:00.088 15:12:00 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:00.088 15:12:00 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:19:00.088 15:12:00 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:00.088 15:12:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:00.088 15:12:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:00.088 15:12:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:00.088 ************************************
00:19:00.088 START TEST bdev_json_nonenclosed
00:19:00.088 ************************************
00:19:00.088 15:12:00 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:00.088 [2024-11-20 15:12:00.834630] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization...
00:19:00.088 [2024-11-20 15:12:00.835000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75112 ] 00:19:00.347 [2024-11-20 15:12:01.024060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.606 [2024-11-20 15:12:01.181453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.606 [2024-11-20 15:12:01.181603] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:00.606 [2024-11-20 15:12:01.181630] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:00.606 [2024-11-20 15:12:01.181644] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:00.866 00:19:00.866 real 0m0.748s 00:19:00.866 user 0m0.472s 00:19:00.866 sys 0m0.169s 00:19:00.866 15:12:01 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.866 15:12:01 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:00.866 ************************************ 00:19:00.866 END TEST bdev_json_nonenclosed 00:19:00.866 ************************************ 00:19:00.866 15:12:01 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:00.866 15:12:01 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:00.866 15:12:01 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.866 15:12:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:00.866 ************************************ 00:19:00.866 START TEST bdev_json_nonarray 00:19:00.866 ************************************ 00:19:00.866 15:12:01 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:00.866 [2024-11-20 15:12:01.664805] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:19:00.866 [2024-11-20 15:12:01.664962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75143 ] 00:19:01.124 [2024-11-20 15:12:01.849628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.382 [2024-11-20 15:12:02.003604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.382 [2024-11-20 15:12:02.003771] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:01.382 [2024-11-20 15:12:02.003800] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:01.382 [2024-11-20 15:12:02.003815] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:01.700 ************************************ 00:19:01.700 END TEST bdev_json_nonarray 00:19:01.700 ************************************ 00:19:01.700 00:19:01.700 real 0m0.744s 00:19:01.700 user 0m0.475s 00:19:01.700 sys 0m0.163s 00:19:01.700 15:12:02 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.700 15:12:02 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:01.700 15:12:02 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:19:01.700 15:12:02 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:19:01.700 15:12:02 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:19:01.700 15:12:02 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:19:01.700 15:12:02 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:19:01.700 15:12:02 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:01.700 15:12:02 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:01.700 15:12:02 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:19:01.700 15:12:02 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:19:01.700 15:12:02 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:19:01.700 15:12:02 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:19:01.700 15:12:02 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:02.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:07.556 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:07.556 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:19:07.556 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:07.556 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:19:07.556 00:19:07.556 real 1m3.381s 00:19:07.556 user 1m42.543s 00:19:07.556 sys 0m40.031s 00:19:07.556 15:12:07 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.556 15:12:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.556 ************************************ 00:19:07.556 END TEST blockdev_xnvme 00:19:07.556 ************************************ 00:19:07.556 15:12:08 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:19:07.556 15:12:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:07.556 15:12:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.556 15:12:08 -- common/autotest_common.sh@10 -- # set +x 00:19:07.556 ************************************ 00:19:07.556 START TEST ublk 00:19:07.556 ************************************ 00:19:07.556 15:12:08 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:19:07.556 * Looking for test storage... 
00:19:07.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:19:07.556 15:12:08 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:19:07.556 15:12:08 ublk -- common/autotest_common.sh@1693 -- # lcov --version
00:19:07.556 15:12:08 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:19:07.556 15:12:08 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:19:07.556 15:12:08 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:07.556 15:12:08 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:07.556 15:12:08 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:07.556 15:12:08 ublk -- scripts/common.sh@336 -- # IFS=.-:
00:19:07.556 15:12:08 ublk -- scripts/common.sh@336 -- # read -ra ver1
00:19:07.556 15:12:08 ublk -- scripts/common.sh@337 -- # IFS=.-:
00:19:07.556 15:12:08 ublk -- scripts/common.sh@337 -- # read -ra ver2
00:19:07.556 15:12:08 ublk -- scripts/common.sh@338 -- # local 'op=<'
00:19:07.556 15:12:08 ublk -- scripts/common.sh@340 -- # ver1_l=2
00:19:07.556 15:12:08 ublk -- scripts/common.sh@341 -- # ver2_l=1
00:19:07.556 15:12:08 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:07.556 15:12:08 ublk -- scripts/common.sh@344 -- # case "$op" in
00:19:07.556 15:12:08 ublk -- scripts/common.sh@345 -- # : 1
00:19:07.556 15:12:08 ublk -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:07.556 15:12:08 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:07.556 15:12:08 ublk -- scripts/common.sh@365 -- # decimal 1
00:19:07.556 15:12:08 ublk -- scripts/common.sh@353 -- # local d=1
00:19:07.556 15:12:08 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:07.556 15:12:08 ublk -- scripts/common.sh@355 -- # echo 1
00:19:07.556 15:12:08 ublk -- scripts/common.sh@365 -- # ver1[v]=1
00:19:07.556 15:12:08 ublk -- scripts/common.sh@366 -- # decimal 2
00:19:07.556 15:12:08 ublk -- scripts/common.sh@353 -- # local d=2
00:19:07.556 15:12:08 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:07.556 15:12:08 ublk -- scripts/common.sh@355 -- # echo 2
00:19:07.556 15:12:08 ublk -- scripts/common.sh@366 -- # ver2[v]=2
00:19:07.556 15:12:08 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:07.557 15:12:08 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:07.557 15:12:08 ublk -- scripts/common.sh@368 -- # return 0
00:19:07.557 15:12:08 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:07.557 15:12:08 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:19:07.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:07.557 --rc genhtml_branch_coverage=1
00:19:07.557 --rc genhtml_function_coverage=1
00:19:07.557 --rc genhtml_legend=1
00:19:07.557 --rc geninfo_all_blocks=1
00:19:07.557 --rc geninfo_unexecuted_blocks=1
00:19:07.557
00:19:07.557 '
00:19:07.557 15:12:08 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:19:07.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:07.557 --rc genhtml_branch_coverage=1
00:19:07.557 --rc genhtml_function_coverage=1
00:19:07.557 --rc genhtml_legend=1
00:19:07.557 --rc geninfo_all_blocks=1
00:19:07.557 --rc geninfo_unexecuted_blocks=1
00:19:07.557
00:19:07.557 '
00:19:07.557 15:12:08 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:19:07.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:07.557 --rc genhtml_branch_coverage=1
00:19:07.557 --rc genhtml_function_coverage=1
00:19:07.557 --rc genhtml_legend=1
00:19:07.557 --rc geninfo_all_blocks=1
00:19:07.557 --rc geninfo_unexecuted_blocks=1
00:19:07.557
00:19:07.557 '
00:19:07.557 15:12:08 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:19:07.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:07.557 --rc genhtml_branch_coverage=1
00:19:07.557 --rc genhtml_function_coverage=1
00:19:07.557 --rc genhtml_legend=1
00:19:07.557 --rc geninfo_all_blocks=1
00:19:07.557 --rc geninfo_unexecuted_blocks=1
00:19:07.557
00:19:07.557 '
00:19:07.557 15:12:08 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:19:07.557 15:12:08 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:19:07.557 15:12:08 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512
00:19:07.557 15:12:08 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:19:07.557 15:12:08 ublk -- lvol/common.sh@9 -- # AIO_BS=4096
00:19:07.557 15:12:08 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:19:07.557 15:12:08 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:19:07.557 15:12:08 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:19:07.557 15:12:08 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:19:07.557 15:12:08 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]]
00:19:07.557 15:12:08 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4
00:19:07.557 15:12:08 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4
00:19:07.557 15:12:08 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512
00:19:07.557 15:12:08 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128
00:19:07.557 15:12:08 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1
00:19:07.557 15:12:08 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096
00:19:07.557 15:12:08 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728
00:19:07.557 15:12:08 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3
00:19:07.557 15:12:08 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv
00:19:07.557 15:12:08 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config
00:19:07.557 15:12:08 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:07.557 15:12:08 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:07.557 15:12:08 ublk -- common/autotest_common.sh@10 -- # set +x
00:19:07.557 ************************************
00:19:07.557 START TEST test_save_ublk_config
00:19:07.557 ************************************
00:19:07.557 15:12:08 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config
00:19:07.557 15:12:08 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config
00:19:07.557 15:12:08 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75444
00:19:07.557 15:12:08 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT
00:19:07.557 15:12:08 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk
00:19:07.557 15:12:08 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75444
00:19:07.557 15:12:08 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75444 ']'
00:19:07.557 15:12:08 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:07.557 15:12:08 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:07.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:07.557 15:12:08 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:07.557 15:12:08 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:07.557 15:12:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:07.816 [2024-11-20 15:12:08.465529] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization...
00:19:07.816 [2024-11-20 15:12:08.465939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75444 ]
00:19:08.074 [2024-11-20 15:12:08.655212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:08.074 [2024-11-20 15:12:08.801070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:09.453 15:12:09 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:09.453 15:12:09 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0
00:19:09.454 15:12:09 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0
00:19:09.454 15:12:09 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd
00:19:09.454 15:12:09 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:09.454 15:12:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:09.454 [2024-11-20 15:12:09.869766] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:19:09.454 [2024-11-20 15:12:09.871105] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:19:09.454 malloc0
00:19:09.454 [2024-11-20 15:12:09.964906] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128
00:19:09.454 [2024-11-20 15:12:09.965040] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0
00:19:09.454 [2024-11-20 15:12:09.965055] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:19:09.454 [2024-11-20 15:12:09.965064] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:19:09.454 [2024-11-20 15:12:09.972770] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:09.454 [2024-11-20 15:12:09.972797] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:09.454 [2024-11-20 15:12:09.980750] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:09.454 [2024-11-20 15:12:09.980863] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:19:09.454 [2024-11-20 15:12:10.004780] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:19:09.454 0
00:19:09.454 15:12:10 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:09.454 15:12:10 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config
00:19:09.454 15:12:10 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:09.454 15:12:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:09.713 15:12:10 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:09.713 15:12:10 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{
00:19:09.713 "subsystems": [
00:19:09.713 {
00:19:09.713 "subsystem": "fsdev",
00:19:09.713 "config": [
00:19:09.713 {
00:19:09.713 "method": "fsdev_set_opts",
00:19:09.713 "params": {
00:19:09.713 "fsdev_io_pool_size": 65535,
00:19:09.713 "fsdev_io_cache_size": 256
00:19:09.713 }
00:19:09.713 }
00:19:09.713 ]
00:19:09.713 },
00:19:09.713 {
00:19:09.713 "subsystem": "keyring",
00:19:09.713 "config": []
00:19:09.713 },
00:19:09.713 {
00:19:09.713 "subsystem": "iobuf",
00:19:09.713 "config": [
00:19:09.713 {
00:19:09.713 "method": "iobuf_set_options",
00:19:09.713 "params": {
00:19:09.713 "small_pool_count": 8192,
00:19:09.713 "large_pool_count": 1024,
00:19:09.713 "small_bufsize": 8192,
00:19:09.713 "large_bufsize": 135168,
00:19:09.713 "enable_numa": false
00:19:09.713 }
00:19:09.713 }
00:19:09.713 ]
00:19:09.713 },
00:19:09.713 {
00:19:09.713 "subsystem": "sock",
00:19:09.713 "config": [
00:19:09.713 {
00:19:09.713 "method": "sock_set_default_impl",
00:19:09.713 "params": {
00:19:09.713 "impl_name": "posix"
00:19:09.713 }
00:19:09.713 },
00:19:09.713 {
00:19:09.713 "method": "sock_impl_set_options",
00:19:09.713 "params": {
00:19:09.713 "impl_name": "ssl",
00:19:09.713 "recv_buf_size": 4096,
00:19:09.713 "send_buf_size": 4096,
00:19:09.713 "enable_recv_pipe": true,
00:19:09.713 "enable_quickack": false,
00:19:09.713 "enable_placement_id": 0,
00:19:09.713 "enable_zerocopy_send_server": true,
00:19:09.713 "enable_zerocopy_send_client": false,
00:19:09.713 "zerocopy_threshold": 0,
00:19:09.713 "tls_version": 0,
00:19:09.713 "enable_ktls": false
00:19:09.713 }
00:19:09.713 },
00:19:09.713 {
00:19:09.713 "method": "sock_impl_set_options",
00:19:09.713 "params": {
00:19:09.713 "impl_name": "posix",
00:19:09.713 "recv_buf_size": 2097152,
00:19:09.713 "send_buf_size": 2097152,
00:19:09.713 "enable_recv_pipe": true,
00:19:09.713 "enable_quickack": false,
00:19:09.713 "enable_placement_id": 0,
00:19:09.713 "enable_zerocopy_send_server": true,
00:19:09.713 "enable_zerocopy_send_client": false,
00:19:09.713 "zerocopy_threshold": 0,
00:19:09.713 "tls_version": 0,
00:19:09.713 "enable_ktls": false
00:19:09.713 }
00:19:09.713 }
00:19:09.713 ]
00:19:09.713 },
00:19:09.713 {
00:19:09.713 "subsystem": "vmd",
00:19:09.713 "config": []
00:19:09.713 },
00:19:09.713 {
00:19:09.713 "subsystem": "accel",
00:19:09.713 "config": [
00:19:09.713 {
00:19:09.713 "method": "accel_set_options",
00:19:09.713 "params": {
00:19:09.713 "small_cache_size": 128,
00:19:09.713 "large_cache_size": 16,
00:19:09.713 "task_count": 2048,
00:19:09.713 "sequence_count": 2048,
00:19:09.713 "buf_count": 2048
00:19:09.713 }
00:19:09.713 }
00:19:09.713 ]
00:19:09.713 },
00:19:09.713 {
00:19:09.713 "subsystem": "bdev",
00:19:09.713 "config": [
00:19:09.713 {
00:19:09.713 "method": "bdev_set_options",
00:19:09.713 "params": {
00:19:09.713 "bdev_io_pool_size": 65535,
00:19:09.713 "bdev_io_cache_size": 256,
00:19:09.713 "bdev_auto_examine": true,
00:19:09.713 "iobuf_small_cache_size": 128,
00:19:09.713 "iobuf_large_cache_size": 16
00:19:09.713 }
00:19:09.713 },
00:19:09.713 {
00:19:09.713 "method": "bdev_raid_set_options",
00:19:09.713 "params": {
00:19:09.713 "process_window_size_kb": 1024,
00:19:09.713 "process_max_bandwidth_mb_sec": 0
00:19:09.713 }
00:19:09.713 },
00:19:09.713 {
00:19:09.713 "method": "bdev_iscsi_set_options",
00:19:09.713 "params": {
00:19:09.713 "timeout_sec": 30
00:19:09.713 }
00:19:09.713 },
00:19:09.713 {
00:19:09.713 "method": "bdev_nvme_set_options",
00:19:09.713 "params": {
00:19:09.713 "action_on_timeout": "none",
00:19:09.713 "timeout_us": 0, 00:19:09.713 "timeout_admin_us": 0, 00:19:09.713 "keep_alive_timeout_ms": 10000, 00:19:09.713 "arbitration_burst": 0, 00:19:09.713 "low_priority_weight": 0, 00:19:09.713 "medium_priority_weight": 0, 00:19:09.713 "high_priority_weight": 0, 00:19:09.713 "nvme_adminq_poll_period_us": 10000, 00:19:09.713 "nvme_ioq_poll_period_us": 0, 00:19:09.713 "io_queue_requests": 0, 00:19:09.713 "delay_cmd_submit": true, 00:19:09.713 "transport_retry_count": 4, 00:19:09.713 "bdev_retry_count": 3, 00:19:09.713 "transport_ack_timeout": 0, 00:19:09.713 "ctrlr_loss_timeout_sec": 0, 00:19:09.713 "reconnect_delay_sec": 0, 00:19:09.713 "fast_io_fail_timeout_sec": 0, 00:19:09.713 "disable_auto_failback": false, 00:19:09.713 "generate_uuids": false, 00:19:09.713 "transport_tos": 0, 00:19:09.713 "nvme_error_stat": false, 00:19:09.713 "rdma_srq_size": 0, 00:19:09.713 "io_path_stat": false, 00:19:09.713 "allow_accel_sequence": false, 00:19:09.713 "rdma_max_cq_size": 0, 00:19:09.713 "rdma_cm_event_timeout_ms": 0, 00:19:09.713 "dhchap_digests": [ 00:19:09.713 "sha256", 00:19:09.713 "sha384", 00:19:09.713 "sha512" 00:19:09.713 ], 00:19:09.713 "dhchap_dhgroups": [ 00:19:09.713 "null", 00:19:09.713 "ffdhe2048", 00:19:09.713 "ffdhe3072", 00:19:09.713 "ffdhe4096", 00:19:09.713 "ffdhe6144", 00:19:09.713 "ffdhe8192" 00:19:09.713 ] 00:19:09.713 } 00:19:09.713 }, 00:19:09.713 { 00:19:09.713 "method": "bdev_nvme_set_hotplug", 00:19:09.713 "params": { 00:19:09.713 "period_us": 100000, 00:19:09.713 "enable": false 00:19:09.713 } 00:19:09.713 }, 00:19:09.713 { 00:19:09.713 "method": "bdev_malloc_create", 00:19:09.713 "params": { 00:19:09.713 "name": "malloc0", 00:19:09.713 "num_blocks": 8192, 00:19:09.713 "block_size": 4096, 00:19:09.713 "physical_block_size": 4096, 00:19:09.713 "uuid": "769e5c1f-7b96-4a02-81f9-fa5b01a14621", 00:19:09.713 "optimal_io_boundary": 0, 00:19:09.713 "md_size": 0, 00:19:09.713 "dif_type": 0, 00:19:09.713 "dif_is_head_of_md": false, 00:19:09.713 "dif_pi_format": 0 00:19:09.713 } 00:19:09.713 }, 00:19:09.713 { 00:19:09.713 "method": "bdev_wait_for_examine" 00:19:09.713 } 00:19:09.713 ] 00:19:09.713 }, 00:19:09.713 { 00:19:09.713 "subsystem": "scsi", 00:19:09.713 "config": null 00:19:09.713 }, 00:19:09.713 { 00:19:09.713 "subsystem": "scheduler", 00:19:09.713 "config": [ 00:19:09.713 { 00:19:09.713 "method": "framework_set_scheduler", 00:19:09.713 "params": { 00:19:09.713 "name": "static" 00:19:09.713 } 00:19:09.713 } 00:19:09.713 ] 00:19:09.713 }, 00:19:09.713 { 00:19:09.713 "subsystem": "vhost_scsi", 00:19:09.713 "config": [] 00:19:09.713 }, 00:19:09.713 { 00:19:09.713 "subsystem": "vhost_blk", 00:19:09.713 "config": [] 00:19:09.713 }, 00:19:09.713 { 00:19:09.713 "subsystem": "ublk", 00:19:09.713 "config": [ 00:19:09.713 { 00:19:09.713 "method": "ublk_create_target", 00:19:09.713 "params": { 00:19:09.713 "cpumask": "1" 00:19:09.713 } 00:19:09.713 }, 00:19:09.713 { 00:19:09.713 "method": "ublk_start_disk", 00:19:09.713 "params": { 00:19:09.713 "bdev_name": "malloc0", 00:19:09.713 "ublk_id": 0, 00:19:09.713 "num_queues": 1, 00:19:09.713 "queue_depth": 128 00:19:09.713 } 00:19:09.713 } 00:19:09.713 ] 00:19:09.713 }, 00:19:09.713 { 00:19:09.713 "subsystem": "nbd", 00:19:09.713 "config": [] 00:19:09.713 }, 00:19:09.713 { 00:19:09.714 "subsystem": "nvmf", 00:19:09.714 "config": [ 00:19:09.714 { 00:19:09.714 "method": "nvmf_set_config", 00:19:09.714 "params": { 00:19:09.714 "discovery_filter": "match_any", 00:19:09.714 "admin_cmd_passthru": { 00:19:09.714 "identify_ctrlr": false 
00:19:09.714 },
00:19:09.714 "dhchap_digests": [
00:19:09.714 "sha256",
00:19:09.714 "sha384",
00:19:09.714 "sha512"
00:19:09.714 ],
00:19:09.714 "dhchap_dhgroups": [
00:19:09.714 "null",
00:19:09.714 "ffdhe2048",
00:19:09.714 "ffdhe3072",
00:19:09.714 "ffdhe4096",
00:19:09.714 "ffdhe6144",
00:19:09.714 "ffdhe8192"
00:19:09.714 ]
00:19:09.714 }
00:19:09.714 },
00:19:09.714 {
00:19:09.714 "method": "nvmf_set_max_subsystems",
00:19:09.714 "params": {
00:19:09.714 "max_subsystems": 1024
00:19:09.714 }
00:19:09.714 },
00:19:09.714 {
00:19:09.714 "method": "nvmf_set_crdt",
00:19:09.714 "params": {
00:19:09.714 "crdt1": 0,
00:19:09.714 "crdt2": 0,
00:19:09.714 "crdt3": 0
00:19:09.714 }
00:19:09.714 }
00:19:09.714 ]
00:19:09.714 },
00:19:09.714 {
00:19:09.714 "subsystem": "iscsi",
00:19:09.714 "config": [
00:19:09.714 {
00:19:09.714 "method": "iscsi_set_options",
00:19:09.714 "params": {
00:19:09.714 "node_base": "iqn.2016-06.io.spdk",
00:19:09.714 "max_sessions": 128,
00:19:09.714 "max_connections_per_session": 2,
00:19:09.714 "max_queue_depth": 64,
00:19:09.714 "default_time2wait": 2,
00:19:09.714 "default_time2retain": 20,
00:19:09.714 "first_burst_length": 8192,
00:19:09.714 "immediate_data": true,
00:19:09.714 "allow_duplicated_isid": false,
00:19:09.714 "error_recovery_level": 0,
00:19:09.714 "nop_timeout": 60,
00:19:09.714 "nop_in_interval": 30,
00:19:09.714 "disable_chap": false,
00:19:09.714 "require_chap": false,
00:19:09.714 "mutual_chap": false,
00:19:09.714 "chap_group": 0,
00:19:09.714 "max_large_datain_per_connection": 64,
00:19:09.714 "max_r2t_per_connection": 4,
00:19:09.714 "pdu_pool_size": 36864,
00:19:09.714 "immediate_data_pool_size": 16384,
00:19:09.714 "data_out_pool_size": 2048
00:19:09.714 }
00:19:09.714 }
00:19:09.714 ]
00:19:09.714 }
00:19:09.714 ]
00:19:09.714 }'
00:19:09.714 15:12:10 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75444
00:19:09.714 15:12:10 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75444 ']'
00:19:09.714 15:12:10 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75444
00:19:09.714 15:12:10 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname
00:19:09.714 15:12:10 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:09.714 15:12:10 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75444
00:19:09.714 15:12:10 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:09.714 15:12:10 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 75444
00:19:09.714 15:12:10 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75444'
00:19:09.714 15:12:10 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75444
00:19:09.714 15:12:10 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75444
00:19:11.143 [2024-11-20 15:12:11.901693] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:19:11.143 [2024-11-20 15:12:11.936861] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:11.143 [2024-11-20 15:12:11.937008] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:19:11.143 [2024-11-20 15:12:11.945746] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:11.143 [2024-11-20 15:12:11.945833] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
[2024-11-20 15:12:11.945852] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
[2024-11-20 15:12:11.945881] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
[2024-11-20 15:12:11.946057] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:19:13.680 15:12:13 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75515
00:19:13.680 15:12:13 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75515
00:19:13.680 15:12:13 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75515 ']'
00:19:13.680 15:12:13 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:13.680 15:12:13 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:13.680 15:12:13 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:13.680 15:12:13 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:13.680 15:12:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:13.680 15:12:13 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{
00:19:13.680 "subsystems": [
00:19:13.680 {
00:19:13.680 "subsystem": "fsdev",
00:19:13.680 "config": [
00:19:13.680 {
00:19:13.680 "method": "fsdev_set_opts",
00:19:13.680 "params": {
00:19:13.680 "fsdev_io_pool_size": 65535,
00:19:13.680 "fsdev_io_cache_size": 256
00:19:13.680 }
00:19:13.680 }
00:19:13.680 ]
00:19:13.680 },
00:19:13.680 {
00:19:13.680 "subsystem": "keyring",
00:19:13.680 "config": []
00:19:13.680 },
00:19:13.680 {
00:19:13.680 "subsystem": "iobuf",
00:19:13.680 "config": [
00:19:13.680 {
00:19:13.680 "method": "iobuf_set_options",
00:19:13.680 "params": {
00:19:13.680 "small_pool_count": 8192,
00:19:13.680 "large_pool_count": 1024,
00:19:13.680 "small_bufsize": 8192,
00:19:13.680 "large_bufsize": 135168,
00:19:13.680 "enable_numa": false
00:19:13.680 }
00:19:13.680 }
00:19:13.680 ]
00:19:13.680 },
00:19:13.680 {
00:19:13.680 "subsystem": "sock",
00:19:13.680 "config": [
00:19:13.680 {
00:19:13.680 "method": "sock_set_default_impl",
00:19:13.680 "params": {
00:19:13.680 "impl_name": "posix"
00:19:13.680 }
00:19:13.680 },
00:19:13.680 {
00:19:13.680 "method": "sock_impl_set_options",
00:19:13.680 "params": {
00:19:13.680 "impl_name": "ssl",
00:19:13.680 "recv_buf_size": 4096,
00:19:13.680 "send_buf_size": 4096,
00:19:13.680 "enable_recv_pipe": true,
00:19:13.680 "enable_quickack": false,
00:19:13.680 "enable_placement_id": 0,
00:19:13.680 "enable_zerocopy_send_server": true,
00:19:13.680 "enable_zerocopy_send_client": false,
00:19:13.680 "zerocopy_threshold": 0,
00:19:13.680 "tls_version": 0,
00:19:13.680 "enable_ktls": false
00:19:13.680 }
00:19:13.680 },
00:19:13.680 {
00:19:13.680 "method": "sock_impl_set_options",
00:19:13.680 "params": {
00:19:13.680 "impl_name": "posix",
00:19:13.680 "recv_buf_size": 2097152,
00:19:13.680 "send_buf_size": 2097152,
00:19:13.680 "enable_recv_pipe": true,
00:19:13.680 "enable_quickack": false,
00:19:13.680 "enable_placement_id": 0,
00:19:13.680 "enable_zerocopy_send_server": true,
00:19:13.680 "enable_zerocopy_send_client": false,
00:19:13.680 "zerocopy_threshold": 0,
00:19:13.680 "tls_version": 0,
00:19:13.680 "enable_ktls": false
00:19:13.680 }
00:19:13.680 }
00:19:13.680 ]
00:19:13.680 },
00:19:13.680 {
00:19:13.680 "subsystem": "vmd",
00:19:13.680 "config": []
00:19:13.680 },
00:19:13.680 {
00:19:13.680 "subsystem": "accel",
00:19:13.680 "config": [
00:19:13.680 {
00:19:13.680 "method": "accel_set_options",
00:19:13.680 "params": {
00:19:13.680 "small_cache_size": 128,
00:19:13.680 "large_cache_size": 16,
00:19:13.680 "task_count": 2048,
00:19:13.680 "sequence_count": 2048,
00:19:13.680 "buf_count": 2048
00:19:13.680 }
00:19:13.680 }
00:19:13.680 ]
00:19:13.680 },
00:19:13.680 {
00:19:13.680 "subsystem": "bdev",
00:19:13.680 "config": [
00:19:13.680 {
00:19:13.680 "method": "bdev_set_options",
00:19:13.680 "params": {
00:19:13.680 "bdev_io_pool_size": 65535,
00:19:13.680 "bdev_io_cache_size": 256,
00:19:13.680 "bdev_auto_examine": true,
00:19:13.680 "iobuf_small_cache_size": 128,
00:19:13.680 "iobuf_large_cache_size": 16
00:19:13.680 }
00:19:13.680 },
00:19:13.680 {
00:19:13.680 "method": "bdev_raid_set_options",
00:19:13.680 "params": {
00:19:13.680 "process_window_size_kb": 1024,
00:19:13.680 "process_max_bandwidth_mb_sec": 0
00:19:13.680 }
00:19:13.680 },
00:19:13.680 {
00:19:13.680 "method": "bdev_iscsi_set_options",
00:19:13.680 "params": {
00:19:13.680 "timeout_sec": 30
00:19:13.680 }
00:19:13.680 },
00:19:13.680 {
00:19:13.680 "method": "bdev_nvme_set_options",
00:19:13.680 "params": {
00:19:13.680 "action_on_timeout": "none",
00:19:13.680 "timeout_us": 0,
00:19:13.680 "timeout_admin_us": 0,
00:19:13.680 "keep_alive_timeout_ms": 10000,
00:19:13.680 "arbitration_burst": 0,
00:19:13.680 "low_priority_weight": 0,
00:19:13.680 "medium_priority_weight": 0,
00:19:13.680 "high_priority_weight": 0,
00:19:13.680 "nvme_adminq_poll_period_us": 10000,
00:19:13.680 "nvme_ioq_poll_period_us": 0,
00:19:13.680 "io_queue_requests": 0,
00:19:13.680 "delay_cmd_submit": true,
00:19:13.680 "transport_retry_count": 4,
00:19:13.680 "bdev_retry_count": 3,
00:19:13.680 "transport_ack_timeout": 0,
00:19:13.680 "ctrlr_loss_timeout_sec": 0,
00:19:13.680 "reconnect_delay_sec": 0,
00:19:13.680 "fast_io_fail_timeout_sec": 0,
00:19:13.680 "disable_auto_failback": false,
00:19:13.680 "generate_uuids": false,
00:19:13.680 "transport_tos": 0,
00:19:13.680 "nvme_error_stat": false,
00:19:13.680 "rdma_srq_size": 0,
00:19:13.680 "io_path_stat": false,
00:19:13.680 "allow_accel_sequence": false,
00:19:13.680 "rdma_max_cq_size": 0,
00:19:13.680 "rdma_cm_event_timeout_ms": 0,
00:19:13.680 "dhchap_digests": [
00:19:13.680 "sha256",
00:19:13.680 "sha384",
00:19:13.680 "sha512"
00:19:13.680 ],
00:19:13.680 "dhchap_dhgroups": [
00:19:13.680 "null",
00:19:13.680 "ffdhe2048",
00:19:13.680 "ffdhe3072",
00:19:13.680 "ffdhe4096",
00:19:13.680 "ffdhe6144",
00:19:13.680 "ffdhe8192"
00:19:13.680 ]
00:19:13.680 }
00:19:13.680 },
00:19:13.680 {
00:19:13.680 "method": "bdev_nvme_set_hotplug",
00:19:13.680 "params": {
00:19:13.680 "period_us": 100000,
00:19:13.680 "enable": false
00:19:13.680 }
00:19:13.680 },
00:19:13.680 {
00:19:13.680 "method": "bdev_malloc_create",
00:19:13.680 "params": {
00:19:13.680 "name": "malloc0",
00:19:13.680 "num_blocks": 8192,
00:19:13.680 "block_size": 4096,
00:19:13.680 "physical_block_size": 4096,
00:19:13.680 "uuid": "769e5c1f-7b96-4a02-81f9-fa5b01a14621",
00:19:13.680 "optimal_io_boundary": 0,
00:19:13.680 "md_size": 0,
00:19:13.680 "dif_type": 0,
00:19:13.680 "dif_is_head_of_md": false,
00:19:13.680 "dif_pi_format": 0
00:19:13.680 }
00:19:13.680 },
00:19:13.680 {
00:19:13.681 "method": "bdev_wait_for_examine"
00:19:13.681 }
00:19:13.681 ]
00:19:13.681 },
00:19:13.681 {
00:19:13.681 "subsystem": "scsi",
00:19:13.681 "config": null
00:19:13.681 },
00:19:13.681 {
00:19:13.681 "subsystem": "scheduler",
00:19:13.681 "config": [
00:19:13.681 {
00:19:13.681 "method": "framework_set_scheduler",
00:19:13.681 "params": {
00:19:13.681 "name": "static"
00:19:13.681 }
00:19:13.681 }
00:19:13.681 ]
00:19:13.681 },
00:19:13.681 {
00:19:13.681 "subsystem": "vhost_scsi",
00:19:13.681 "config": []
00:19:13.681 },
00:19:13.681 {
00:19:13.681 "subsystem": "vhost_blk",
00:19:13.681 "config": []
00:19:13.681 },
00:19:13.681 {
00:19:13.681 "subsystem": "ublk",
00:19:13.681 "config": [
00:19:13.681 {
00:19:13.681 "method": "ublk_create_target",
00:19:13.681 "params": {
00:19:13.681 "cpumask": "1"
00:19:13.681 }
00:19:13.681 },
00:19:13.681 {
00:19:13.681 "method": "ublk_start_disk",
00:19:13.681 "params": {
00:19:13.681 "bdev_name": "malloc0",
00:19:13.681 "ublk_id": 0,
00:19:13.681 "num_queues": 1,
00:19:13.681 "queue_depth": 128
00:19:13.681 }
00:19:13.681 }
00:19:13.681 ]
00:19:13.681 },
00:19:13.681 {
00:19:13.681 "subsystem": "nbd",
00:19:13.681 "config": []
00:19:13.681 },
00:19:13.681 {
00:19:13.681 "subsystem": "nvmf",
00:19:13.681 "config": [
00:19:13.681 {
00:19:13.681 "method": "nvmf_set_config",
00:19:13.681 "params": {
00:19:13.681 "discovery_filter": "match_any",
00:19:13.681 "admin_cmd_passthru": {
00:19:13.681 "identify_ctrlr": false
00:19:13.681 },
00:19:13.681 "dhchap_digests": [
00:19:13.681 "sha256",
00:19:13.681 "sha384",
00:19:13.681 "sha512"
00:19:13.681 ],
00:19:13.681 "dhchap_dhgroups": [
00:19:13.681 "null",
00:19:13.681 "ffdhe2048",
00:19:13.681 "ffdhe3072",
00:19:13.681 "ffdhe4096",
00:19:13.681 "ffdhe6144",
00:19:13.681 "ffdhe8192"
00:19:13.681 ]
00:19:13.681 }
00:19:13.681 },
00:19:13.681 {
00:19:13.681 "method": "nvmf_set_max_subsystems",
00:19:13.681 "params": {
00:19:13.681 "max_subsystems": 1024
00:19:13.681 }
00:19:13.681 },
00:19:13.681 {
00:19:13.681 "method": "nvmf_set_crdt",
00:19:13.681 "params": {
00:19:13.681 "crdt1": 0,
00:19:13.681 "crdt2": 0,
00:19:13.681 "crdt3": 0
00:19:13.681 }
00:19:13.681 }
00:19:13.681 ]
00:19:13.681 },
00:19:13.681 {
00:19:13.681 "subsystem": "iscsi",
00:19:13.681 "config": [
00:19:13.681 {
00:19:13.681 "method": "iscsi_set_options",
00:19:13.681 "params": {
00:19:13.681 "node_base": "iqn.2016-06.io.spdk",
00:19:13.681 "max_sessions": 128,
00:19:13.681 "max_connections_per_session": 2,
00:19:13.681 "max_queue_depth": 64,
00:19:13.681 "default_time2wait": 2,
00:19:13.681 "default_time2retain": 20,
00:19:13.681 "first_burst_length": 8192,
00:19:13.681 "immediate_data": true,
00:19:13.681 "allow_duplicated_isid": false,
00:19:13.681 "error_recovery_level": 0,
00:19:13.681 "nop_timeout": 60,
00:19:13.681 "nop_in_interval": 30,
00:19:13.681 "disable_chap": false,
00:19:13.681 "require_chap": false,
00:19:13.681 "mutual_chap": false,
00:19:13.681 "chap_group": 0,
00:19:13.681 "max_large_datain_per_connection": 64,
00:19:13.681 "max_r2t_per_connection": 4,
00:19:13.681 "pdu_pool_size": 36864,
00:19:13.681 "immediate_data_pool_size": 16384,
00:19:13.681 "data_out_pool_size": 2048
00:19:13.681 }
00:19:13.681 }
00:19:13.681 ]
00:19:13.681 }
00:19:13.681 ]
00:19:13.681 }'
00:19:13.681 15:12:13 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63
00:19:13.681 [2024-11-20 15:12:14.082376] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization...
00:19:13.681 [2024-11-20 15:12:14.082566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75515 ]
00:19:13.771 [2024-11-20 15:12:14.271555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:13.772 [2024-11-20 15:12:14.413475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:15.062 [2024-11-20 15:12:15.631773] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:19:15.062 [2024-11-20 15:12:15.633187] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:19:15.062 [2024-11-20 15:12:15.640060] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128
00:19:15.062 [2024-11-20 15:12:15.640233] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0
00:19:15.062 [2024-11-20 15:12:15.640251] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:19:15.062 [2024-11-20 15:12:15.640261] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:19:15.062 [2024-11-20 15:12:15.648929] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:15.062 [2024-11-20 15:12:15.648980] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:15.062 [2024-11-20 15:12:15.654842] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:15.062 [2024-11-20 15:12:15.655029] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:19:15.062 [2024-11-20 15:12:15.671800] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device'
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]]
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]]
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75515
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75515 ']'
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75515
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75515
killing process with pid 75515
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75515'
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75515
00:19:15.062 15:12:15 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75515
00:19:16.970 [2024-11-20 15:12:17.612231] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:19:16.970 [2024-11-20 15:12:17.646767] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:16.970 [2024-11-20 15:12:17.646905] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:19:16.970 [2024-11-20 15:12:17.654752] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:16.970 [2024-11-20 15:12:17.654810] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:19:16.970 [2024-11-20 15:12:17.654820] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:19:16.970 [2024-11-20 15:12:17.654848] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:19:16.970 [2024-11-20 15:12:17.655019] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:19:18.877 15:12:19 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT
00:19:18.877
00:19:18.877 real 0m11.316s
00:19:18.877 user 0m8.518s
00:19:18.877 sys 0m3.664s
00:19:18.877 15:12:19 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:18.877 15:12:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:18.877 ************************************
00:19:18.877 END TEST test_save_ublk_config
00:19:18.877 ************************************
00:19:19.137 15:12:19 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75612
00:19:19.137 15:12:19 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:19:19.137 15:12:19 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:19.137 15:12:19 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75612
00:19:19.137 15:12:19 ublk -- common/autotest_common.sh@835 -- # '[' -z 75612 ']'
00:19:19.137 15:12:19 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:19.137 15:12:19 ublk -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:19.137 15:12:19 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:19.137 15:12:19 ublk -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:19.137 15:12:19 ublk -- common/autotest_common.sh@10 -- # set +x
00:19:19.137 [2024-11-20 15:12:19.831760] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization...
00:19:19.137 [2024-11-20 15:12:19.831910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75612 ]
00:19:19.396 [2024-11-20 15:12:20.013101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:19.396 [2024-11-20 15:12:20.160895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:19.396 [2024-11-20 15:12:20.160935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:20.775 15:12:21 ublk -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:20.775 15:12:21 ublk -- common/autotest_common.sh@868 -- # return 0
00:19:20.775 15:12:21 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk
00:19:20.775 15:12:21 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:20.775 15:12:21 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:20.775 15:12:21 ublk -- common/autotest_common.sh@10 -- # set +x
00:19:20.775 ************************************
00:19:20.775 START TEST test_create_ublk
00:19:20.775 ************************************
00:19:20.776 15:12:21 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk
00:19:20.776 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target
00:19:20.776 15:12:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:20.776 15:12:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:20.776 [2024-11-20 15:12:21.214745] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:19:20.776 [2024-11-20 15:12:21.217951] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:19:20.776 15:12:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:20.776 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target=
00:19:20.776 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096
00:19:20.776 15:12:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:20.776 15:12:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:20.776 15:12:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:20.776 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0
00:19:20.776 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:19:20.776 15:12:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:20.776 15:12:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:20.776 [2024-11-20 15:12:21.569934] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:19:20.776 [2024-11-20 15:12:21.570448] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:19:20.776 [2024-11-20 15:12:21.570465] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:19:20.776 [2024-11-20 15:12:21.570474] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:19:20.776 [2024-11-20 15:12:21.577776] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:20.776 [2024-11-20 15:12:21.577801] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
[2024-11-20 15:12:21.585754] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
[2024-11-20 15:12:21.586397] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:19:21.035 [2024-11-20 15:12:21.616760] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:19:21.035 15:12:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:21.035 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0
00:19:21.035 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0
00:19:21.035 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0
00:19:21.035 15:12:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:21.035 15:12:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:21.035 15:12:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:21.035 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[
00:19:21.035 {
00:19:21.035 "ublk_device": "/dev/ublkb0",
00:19:21.035 "id": 0,
00:19:21.035 "queue_depth": 512,
00:19:21.035 "num_queues": 4,
00:19:21.035 "bdev_name": "Malloc0"
00:19:21.035 }
00:19:21.035 ]'
00:19:21.035 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device'
00:19:21.035 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:19:21.035 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id'
00:19:21.035 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]]
00:19:21.035 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth'
00:19:21.035 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]]
00:19:21.035 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues'
00:19:21.035 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]]
00:19:21.035 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name'
00:19:21.035 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:19:21.035 15:12:21 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10'
00:19:21.035 15:12:21 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0
00:19:21.035 15:12:21 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0
00:19:21.035 15:12:21 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728
00:19:21.035 15:12:21 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write
00:19:21.035 15:12:21 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc
00:19:21.035 15:12:21 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10'
00:19:21.035 15:12:21 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template=
00:19:21.035 15:12:21 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]]
00:19:21.035 15:12:21 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:19:21.035 15:12:21 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:19:21.036 15:12:21 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:19:21.295 fio: verification read phase will never start because write phase uses all of runtime
00:19:21.295 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:19:21.295 fio-3.35
00:19:21.295 Starting 1 process
00:19:31.337
00:19:31.337 fio_test: (groupid=0, jobs=1): err= 0: pid=75664: Wed Nov 20 15:12:32 2024
00:19:31.337 write: IOPS=15.7k, BW=61.4MiB/s (64.3MB/s)(614MiB/10001msec); 0 zone resets
00:19:31.337 clat (usec): min=38, max=4057, avg=62.84, stdev=100.39
00:19:31.337 lat (usec): min=39, max=4057, avg=63.29, stdev=100.40
00:19:31.337 clat percentiles (usec):
00:19:31.337 | 1.00th=[ 41], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 56],
00:19:31.337 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 58], 60.00th=[ 59],
00:19:31.337 | 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 65], 95.00th=[ 70],
00:19:31.337 | 99.00th=[ 86], 99.50th=[ 94], 99.90th=[ 2040], 99.95th=[ 2900],
00:19:31.337 | 99.99th=[ 3687]
00:19:31.337 bw ( KiB/s): min=59496, max=70776, per=100.00%, avg=62978.11, stdev=2373.64, samples=19
00:19:31.337 iops : min=14874, max=17694, avg=15744.53, stdev=593.41, samples=19
00:19:31.337 lat (usec) : 50=2.84%, 100=96.81%, 250=0.15%, 500=0.01%, 750=0.01%
00:19:31.337 lat (usec) : 1000=0.01%
00:19:31.337 lat (msec) : 2=0.07%, 4=0.10%, 10=0.01%
00:19:31.337 cpu : usr=3.22%, sys=10.25%, ctx=157086, majf=0, minf=796
00:19:31.337 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:31.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:31.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:31.337 issued rwts: total=0,157086,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:31.337 latency : target=0, window=0, percentile=100.00%, depth=1
00:19:31.337
00:19:31.337 Run status group 0 (all jobs):
00:19:31.337 WRITE: bw=61.4MiB/s (64.3MB/s), 61.4MiB/s-61.4MiB/s (64.3MB/s-64.3MB/s), io=614MiB (643MB), run=10001-10001msec
00:19:31.337
00:19:31.337 Disk stats (read/write):
00:19:31.337 ublkb0: ios=0/155461, merge=0/0, ticks=0/8635, in_queue=8636, util=99.15%
00:19:31.337 15:12:32 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:19:31.337 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.337 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:31.337 [2024-11-20 15:12:32.150571] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:19:31.622 [2024-11-20 15:12:32.189810] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:31.622 [2024-11-20 15:12:32.190689] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:19:31.622 [2024-11-20 15:12:32.206784] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:31.622 [2024-11-20 15:12:32.211032] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:19:31.622 [2024-11-20 15:12:32.211060] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.622 15:12:32 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:31.622 [2024-11-20 15:12:32.221857] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0
00:19:31.622 request:
00:19:31.622 {
00:19:31.622 "ublk_id": 0,
00:19:31.622 "method": "ublk_stop_disk",
00:19:31.622 "req_id": 1
00:19:31.622 }
00:19:31.622 Got JSON-RPC error response
00:19:31.622 response:
00:19:31.622 {
00:19:31.622 "code": -19,
00:19:31.622 "message": "No such device"
00:19:31.622 }
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:31.622 15:12:32 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:31.622 [2024-11-20 15:12:32.245858] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:19:31.622 [2024-11-20 15:12:32.253750] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:19:31.622 [2024-11-20 15:12:32.253796] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.622 15:12:32 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.622 15:12:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:32.575 15:12:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.575 15:12:33 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices
00:19:32.575 15:12:33 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:19:32.575 15:12:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.575 15:12:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:32.575 15:12:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.575 15:12:33 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:19:32.575 15:12:33 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length
00:19:32.575 15:12:33 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:19:32.575 15:12:33 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:19:32.575 15:12:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.575 15:12:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:32.575 15:12:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.575 15:12:33 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:19:32.575 15:12:33 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length
00:19:32.575 15:12:33 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:19:32.575
00:19:32.575 real 0m11.970s
00:19:32.575 user 0m0.708s
00:19:32.575 sys 0m1.174s
00:19:32.575 ************************************
00:19:32.575 END TEST test_create_ublk
00:19:32.575 ************************************
00:19:32.575 15:12:33 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:32.575 15:12:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:32.575 15:12:33 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk
00:19:32.575 15:12:33 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:32.575 15:12:33 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:32.575 15:12:33 ublk -- common/autotest_common.sh@10 -- # set +x
00:19:32.575 ************************************
00:19:32.575 START TEST test_create_multi_ublk
00:19:32.575 ************************************
00:19:32.575 15:12:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk
00:19:32.575 15:12:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target
00:19:32.575 15:12:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.575 15:12:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:32.575 [2024-11-20 15:12:33.253743] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:19:32.575 [2024-11-20 15:12:33.256959] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:19:32.575 15:12:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.575 15:12:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target=
00:19:32.575 15:12:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3
00:19:32.575 15:12:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:32.575 15:12:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096
00:19:32.575 15:12:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.575 15:12:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:32.833 15:12:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.833 15:12:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0
00:19:32.833 15:12:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:19:32.833 15:12:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.833 15:12:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:32.833 [2024-11-20 15:12:33.580934] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:19:32.833 [2024-11-20 15:12:33.581477] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:19:32.833 [2024-11-20 15:12:33.581496] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:19:32.833 [2024-11-20 15:12:33.581512] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:19:32.833 [2024-11-20 15:12:33.588784] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:32.833 [2024-11-20 15:12:33.588817] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:32.833 [2024-11-20 15:12:33.596777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:32.833 [2024-11-20 15:12:33.597533] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:19:32.833 [2024-11-20 15:12:33.610851] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:19:32.833 15:12:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.833 15:12:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0
00:19:32.833 15:12:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:32.833 15:12:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096
00:19:32.833 15:12:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.833 15:12:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:33.400 15:12:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:33.400 15:12:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1
00:19:33.400 15:12:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512
00:19:33.400 15:12:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:33.400 15:12:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:33.400 [2024-11-20 15:12:33.981926] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512
00:19:33.400 [2024-11-20 15:12:33.982469] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1
00:19:33.400 [2024-11-20 15:12:33.982486] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:19:33.400 [2024-11-20 15:12:33.982495] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:19:33.400 [2024-11-20 15:12:33.989789] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:33.400 [2024-11-20 15:12:33.989815] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:33.400 [2024-11-20 15:12:33.997750] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:33.400 [2024-11-20 15:12:33.998416] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:19:33.400 [2024-11-20 15:12:34.003281] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:19:33.400 15:12:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:33.400 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1
00:19:33.400 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:33.400 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096
00:19:33.400 15:12:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:33.400 15:12:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:33.659 15:12:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:33.659 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2
00:19:33.659 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512
00:19:33.659 15:12:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:33.659 15:12:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:33.659 [2024-11-20 15:12:34.374905] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512
00:19:33.659 [2024-11-20 15:12:34.375429] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2
00:19:33.659 [2024-11-20 15:12:34.375441] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq
00:19:33.659 [2024-11-20 15:12:34.375453] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV
00:19:33.659 [2024-11-20 15:12:34.382772] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:33.659 [2024-11-20 15:12:34.382802] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:33.659 [2024-11-20 15:12:34.390749] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:33.659 [2024-11-20 15:12:34.391441] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV
00:19:33.659 [2024-11-20 15:12:34.399757] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed
00:19:33.659 15:12:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:33.659 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2
00:19:33.659 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:33.659 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096
00:19:33.659 15:12:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:33.659 15:12:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:33.917 15:12:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:33.917 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3
00:19:33.917 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512
00:19:33.917 15:12:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:33.917 15:12:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:33.917 [2024-11-20 15:12:34.740931] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512
00:19:33.917 [2024-11-20 15:12:34.741447] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3
00:19:33.917 [2024-11-20 15:12:34.741462] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq
00:19:33.917 [2024-11-20 15:12:34.741471] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV
[2024-11-20 15:12:34.748777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:33.917 [2024-11-20 15:12:34.748802] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:34.176 [2024-11-20 15:12:34.756769] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:34.176 [2024-11-20 15:12:34.757421] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:19:34.176 [2024-11-20 15:12:34.765828] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:19:34.176 { 00:19:34.176 "ublk_device": "/dev/ublkb0", 00:19:34.176 "id": 0, 00:19:34.176 "queue_depth": 512, 00:19:34.176 "num_queues": 4, 00:19:34.176 "bdev_name": "Malloc0" 00:19:34.176 }, 00:19:34.176 { 00:19:34.176 "ublk_device": "/dev/ublkb1", 00:19:34.176 "id": 1, 00:19:34.176 "queue_depth": 512, 00:19:34.176 "num_queues": 4, 00:19:34.176 "bdev_name": "Malloc1" 00:19:34.176 }, 00:19:34.176 { 00:19:34.176 "ublk_device": "/dev/ublkb2", 00:19:34.176 "id": 2, 00:19:34.176 "queue_depth": 512, 00:19:34.176 "num_queues": 4, 00:19:34.176 "bdev_name": "Malloc2" 00:19:34.176 }, 00:19:34.176 { 00:19:34.176 "ublk_device": "/dev/ublkb3", 00:19:34.176 "id": 3, 00:19:34.176 "queue_depth": 512, 00:19:34.176 "num_queues": 4, 00:19:34.176 "bdev_name": "Malloc3" 00:19:34.176 } 00:19:34.176 ]' 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:34.176 15:12:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:19:34.435 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:19:34.435 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:34.435 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:19:34.435 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:19:34.435 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:19:34.435 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:19:34.435 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:19:34.435 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:34.435 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:19:34.435 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:34.435 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:19:34.435 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:19:34.435 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:34.435 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:19:34.692 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:19:34.692 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:19:34.692 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:19:34.692 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:19:34.692 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:34.692 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:19:34.693 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:34.693 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:19:34.693 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:19:34.693 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:34.693 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:19:34.950 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:19:34.950 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:19:34.950 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:19:34.950 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:19:34.950 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:34.950 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:19:34.950 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:34.950 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:19:34.950 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:19:34.950 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:19:34.950 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:19:34.950 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:34.950 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:19:34.950 15:12:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.951 15:12:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:34.951 [2024-11-20 15:12:35.734918] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:34.951 [2024-11-20 15:12:35.771301] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:34.951 [2024-11-20 15:12:35.772307] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:34.951 [2024-11-20 15:12:35.777889] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:34.951 [2024-11-20 15:12:35.778206] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:34.951 [2024-11-20 15:12:35.778222] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:35.209 15:12:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.209 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:35.209 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:19:35.209 15:12:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.209 15:12:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:35.209 [2024-11-20 15:12:35.791861] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:35.209 [2024-11-20 15:12:35.830317] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:35.209 [2024-11-20 15:12:35.831416] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:35.209 [2024-11-20 15:12:35.837764] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:35.209 [2024-11-20 15:12:35.838086] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:35.209 [2024-11-20 15:12:35.838105] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:35.209 15:12:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.209 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:35.209 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:19:35.209 15:12:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.209 15:12:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:35.209 [2024-11-20 15:12:35.851921] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:19:35.209 [2024-11-20 15:12:35.910831] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:35.209 [2024-11-20 15:12:35.911716] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:19:35.209 [2024-11-20 15:12:35.918926] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:35.209 [2024-11-20 15:12:35.919244] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:19:35.209 [2024-11-20 15:12:35.919260] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:19:35.209 15:12:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.209 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:35.209 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:19:35.209 15:12:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.209 15:12:35 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:19:35.209 [2024-11-20 15:12:35.934908] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:19:35.209 [2024-11-20 15:12:35.979809] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:35.209 [2024-11-20 15:12:35.980628] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:19:35.209 [2024-11-20 15:12:35.984539] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:35.209 [2024-11-20 15:12:35.984913] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:19:35.209 [2024-11-20 15:12:35.984930] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:19:35.209 15:12:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.209 15:12:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:19:35.467 [2024-11-20 15:12:36.196882] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:35.467 [2024-11-20 15:12:36.204743] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:35.467 [2024-11-20 15:12:36.204799] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:35.467 15:12:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:19:35.467 15:12:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:35.467 15:12:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:35.467 15:12:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.467 15:12:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:36.403 15:12:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.403 15:12:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:36.403 15:12:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:36.403 15:12:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.403 15:12:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:36.694 15:12:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.694 15:12:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:36.694 15:12:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:19:36.694 15:12:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.694 15:12:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:37.262 15:12:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.262 15:12:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:37.262 15:12:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:19:37.262 15:12:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.262 15:12:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:37.520 15:12:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.520 15:12:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:19:37.520 15:12:38 
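Teardown mirrors creation, as the entries above show: each disk is stopped (UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV and removal from the tailq), the ublk target is destroyed with a widened RPC timeout, and the backing malloc bdevs are deleted before the leftover-device check. A condensed sketch of that sequence, again abbreviating the rpc.py path:

# Sketch of the teardown path traced above
for i in $(seq 0 3); do
  rpc.py ublk_stop_disk "$i"          # STOP_DEV + DEL_DEV per device
done
rpc.py -t 120 ublk_destroy_target     # -t 120 allows up to 120 s for shutdown
for i in $(seq 0 3); do
  rpc.py bdev_malloc_delete "Malloc$i"
done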
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:37.520 15:12:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.520 15:12:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:37.520 15:12:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.520 15:12:38 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:37.520 15:12:38 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:19:37.520 15:12:38 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:19:37.520 15:12:38 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:37.520 15:12:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.520 15:12:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:37.520 15:12:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.520 15:12:38 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:37.520 15:12:38 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:19:37.520 ************************************ 00:19:37.520 END TEST test_create_multi_ublk 00:19:37.520 ************************************ 00:19:37.520 15:12:38 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:37.520 00:19:37.520 real 0m5.086s 00:19:37.520 user 0m1.094s 00:19:37.520 sys 0m0.255s 00:19:37.520 15:12:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:37.520 15:12:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:37.779 15:12:38 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:37.779 15:12:38 ublk -- ublk/ublk.sh@147 -- # cleanup 00:19:37.779 15:12:38 ublk -- ublk/ublk.sh@130 -- # killprocess 75612 00:19:37.779 15:12:38 ublk -- common/autotest_common.sh@954 -- # '[' -z 75612 ']' 00:19:37.779 15:12:38 ublk -- common/autotest_common.sh@958 -- # kill -0 75612 00:19:37.779 15:12:38 ublk -- common/autotest_common.sh@959 -- # uname 00:19:37.779 15:12:38 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:37.779 15:12:38 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75612 00:19:37.779 killing process with pid 75612 00:19:37.779 15:12:38 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:37.779 15:12:38 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:37.779 15:12:38 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75612' 00:19:37.779 15:12:38 ublk -- common/autotest_common.sh@973 -- # kill 75612 00:19:37.779 15:12:38 ublk -- common/autotest_common.sh@978 -- # wait 75612 00:19:39.154 [2024-11-20 15:12:39.719267] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:39.154 [2024-11-20 15:12:39.719352] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:40.530 00:19:40.530 real 0m33.072s 00:19:40.530 user 0m46.698s 00:19:40.530 sys 0m11.468s 00:19:40.530 15:12:41 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.530 15:12:41 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:40.530 ************************************ 00:19:40.530 END TEST ublk 00:19:40.530 ************************************ 00:19:40.530 15:12:41 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:40.530 
15:12:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:40.530 15:12:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.530 15:12:41 -- common/autotest_common.sh@10 -- # set +x 00:19:40.530 ************************************ 00:19:40.530 START TEST ublk_recovery 00:19:40.530 ************************************ 00:19:40.530 15:12:41 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:40.531 * Looking for test storage... 00:19:40.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:19:40.531 15:12:41 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:40.531 15:12:41 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:19:40.531 15:12:41 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:40.790 15:12:41 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.790 15:12:41 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:19:40.790 15:12:41 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.790 15:12:41 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:40.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.790 --rc genhtml_branch_coverage=1 00:19:40.790 --rc genhtml_function_coverage=1 00:19:40.790 --rc genhtml_legend=1 00:19:40.790 --rc geninfo_all_blocks=1 00:19:40.790 --rc geninfo_unexecuted_blocks=1 00:19:40.790 00:19:40.790 ' 00:19:40.790 15:12:41 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:40.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.790 --rc genhtml_branch_coverage=1 00:19:40.790 --rc genhtml_function_coverage=1 00:19:40.790 --rc genhtml_legend=1 00:19:40.790 --rc geninfo_all_blocks=1 00:19:40.790 --rc geninfo_unexecuted_blocks=1 00:19:40.790 00:19:40.790 ' 00:19:40.790 15:12:41 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:40.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.790 --rc genhtml_branch_coverage=1 00:19:40.790 --rc genhtml_function_coverage=1 00:19:40.790 --rc genhtml_legend=1 00:19:40.790 --rc geninfo_all_blocks=1 00:19:40.790 --rc geninfo_unexecuted_blocks=1 00:19:40.790 00:19:40.790 ' 00:19:40.790 15:12:41 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:40.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.790 --rc genhtml_branch_coverage=1 00:19:40.790 --rc genhtml_function_coverage=1 00:19:40.790 --rc genhtml_legend=1 00:19:40.790 --rc geninfo_all_blocks=1 00:19:40.790 --rc geninfo_unexecuted_blocks=1 00:19:40.790 00:19:40.790 ' 00:19:40.790 15:12:41 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:19:40.790 15:12:41 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:19:40.790 15:12:41 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:19:40.790 15:12:41 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:19:40.790 15:12:41 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:19:40.790 15:12:41 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:19:40.790 15:12:41 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:19:40.790 15:12:41 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:19:40.790 15:12:41 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:19:40.790 15:12:41 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:19:40.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.790 15:12:41 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76044 00:19:40.790 15:12:41 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:40.790 15:12:41 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76044 00:19:40.790 15:12:41 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76044 ']' 00:19:40.790 15:12:41 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.790 15:12:41 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:40.790 15:12:41 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.790 15:12:41 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.790 15:12:41 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.790 15:12:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.790 [2024-11-20 15:12:41.561092] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:19:40.790 [2024-11-20 15:12:41.561455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76044 ] 00:19:41.049 [2024-11-20 15:12:41.750794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:41.308 [2024-11-20 15:12:41.901533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.308 [2024-11-20 15:12:41.901590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.247 15:12:42 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.247 15:12:42 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:19:42.247 15:12:42 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:19:42.247 15:12:42 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.247 15:12:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.247 [2024-11-20 15:12:42.956749] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:42.247 [2024-11-20 15:12:42.960292] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:42.247 15:12:42 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.247 15:12:42 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:19:42.247 15:12:42 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.247 15:12:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.506 malloc0 00:19:42.506 15:12:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.506 15:12:43 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:19:42.506 15:12:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.506 15:12:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.506 [2024-11-20 15:12:43.137957] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:19:42.507 [2024-11-20 15:12:43.138132] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:19:42.507 [2024-11-20 15:12:43.138149] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:42.507 [2024-11-20 15:12:43.138163] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:19:42.507 [2024-11-20 15:12:43.145784] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:42.507 [2024-11-20 15:12:43.145813] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:42.507 [2024-11-20 15:12:43.153793] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:42.507 [2024-11-20 15:12:43.153995] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:19:42.507 [2024-11-20 15:12:43.176769] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:19:42.507 1 00:19:42.507 15:12:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.507 15:12:43 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:19:43.444 15:12:44 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76089 00:19:43.444 15:12:44 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:19:43.444 15:12:44 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:19:43.704 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:43.704 fio-3.35 00:19:43.704 Starting 1 process 00:19:48.979 15:12:49 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76044 00:19:48.979 15:12:49 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:19:54.251 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76044 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:19:54.251 15:12:54 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76193 00:19:54.251 15:12:54 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:54.251 15:12:54 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.251 15:12:54 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76193 00:19:54.251 15:12:54 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76193 ']' 00:19:54.251 15:12:54 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.251 15:12:54 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.251 15:12:54 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.251 15:12:54 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.251 15:12:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:54.251 [2024-11-20 15:12:54.333405] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
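At this point the recovery scenario is fully set up: the first target (pid 76044) exposed /dev/ublkb1 over a 64 MiB malloc bdev with 2 queues of depth 128, a 60-second randrw fio job was pinned to cores 2-3 against it, and the target was then killed with SIGKILL while I/O was in flight; the entries that follow show the replacement target (pid 76193) booting. A sketch of the sequence ublk_recovery.sh drives, with paths abbreviated and the pids symbolic:

# Sketch of the crash-and-recover flow (pids symbolic, paths shortened)
rpc.py ublk_create_target
rpc.py bdev_malloc_create -b malloc0 64 4096
rpc.py ublk_start_disk malloc0 1 -q 2 -d 128      # /dev/ublkb1
taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
  --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
  --time_based --runtime=60 &
fio_proc=$!
kill -9 "$spdk_pid"                               # crash the target mid-I/O
"$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &         # respawn it
# once the new target is listening, recover instead of restarting the disk:
rpc.py ublk_create_target
rpc.py bdev_malloc_create -b malloc0 64 4096
rpc.py ublk_recover_disk malloc0 1                # GET_DEV_INFO, then user recovery
wait "$fio_proc"

The UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY handshake logged below re-attaches the existing /dev/ublkb1 to the new process, and the fio summary further on (err= 0, about 83 MiB/s each way at util=99.96%) confirms the job rode out the crash.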
00:19:54.251 [2024-11-20 15:12:54.333580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76193 ] 00:19:54.251 [2024-11-20 15:12:54.521270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:54.251 [2024-11-20 15:12:54.667803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.251 [2024-11-20 15:12:54.667844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.187 15:12:55 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.187 15:12:55 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:19:55.187 15:12:55 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:19:55.187 15:12:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.187 15:12:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.187 [2024-11-20 15:12:55.717745] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:55.187 [2024-11-20 15:12:55.720973] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:55.187 15:12:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.187 15:12:55 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:19:55.187 15:12:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.187 15:12:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.187 malloc0 00:19:55.187 15:12:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.187 15:12:55 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:19:55.187 15:12:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.187 15:12:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.187 [2024-11-20 15:12:55.884955] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:19:55.187 [2024-11-20 15:12:55.885008] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:55.187 [2024-11-20 15:12:55.885021] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:19:55.187 [2024-11-20 15:12:55.892787] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:19:55.187 [2024-11-20 15:12:55.892820] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:19:55.187 [2024-11-20 15:12:55.892832] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:19:55.187 [2024-11-20 15:12:55.892947] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:19:55.187 1 00:19:55.187 15:12:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.187 15:12:55 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76089 00:19:55.187 [2024-11-20 15:12:55.900767] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:19:55.187 [2024-11-20 15:12:55.907485] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:19:55.187 [2024-11-20 15:12:55.915000] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:19:55.187 [2024-11-20 
15:12:55.915028] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:20:51.420 
00:20:51.420 fio_test: (groupid=0, jobs=1): err= 0: pid=76093: Wed Nov 20 15:13:44 2024
00:20:51.420 read: IOPS=21.4k, BW=83.4MiB/s (87.5MB/s)(5006MiB/60002msec)
00:20:51.420 slat (nsec): min=1973, max=909570, avg=7541.43, stdev=3064.55
00:20:51.420 clat (usec): min=1014, max=6727.9k, avg=2926.05, stdev=46008.10
00:20:51.420 lat (usec): min=1022, max=6727.9k, avg=2933.59, stdev=46008.10
00:20:51.420 clat percentiles (usec):
00:20:51.420 | 1.00th=[ 2040], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2376],
00:20:51.420 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507],
00:20:51.420 | 70.00th=[ 2540], 80.00th=[ 2638], 90.00th=[ 2966], 95.00th=[ 3785],
00:20:51.420 | 99.00th=[ 5014], 99.50th=[ 5669], 99.90th=[ 6980], 99.95th=[ 8094],
00:20:51.420 | 99.99th=[12911]
00:20:51.420 bw ( KiB/s): min= 1536, max=101344, per=100.00%, avg=95068.12, stdev=12290.38, samples=107
00:20:51.420 iops : min= 384, max=25336, avg=23766.99, stdev=3072.61, samples=107
00:20:51.420 write: IOPS=21.3k, BW=83.4MiB/s (87.4MB/s)(5002MiB/60002msec); 0 zone resets
00:20:51.420 slat (usec): min=2, max=1040, avg= 7.58, stdev= 3.17
00:20:51.420 clat (usec): min=1003, max=6728.1k, avg=3052.49, stdev=49003.17
00:20:51.420 lat (usec): min=1011, max=6728.1k, avg=3060.07, stdev=49003.18
00:20:51.420 clat percentiles (usec):
00:20:51.420 | 1.00th=[ 2057], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2474],
00:20:51.420 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2606],
00:20:51.420 | 70.00th=[ 2671], 80.00th=[ 2737], 90.00th=[ 2999], 95.00th=[ 3785],
00:20:51.420 | 99.00th=[ 5014], 99.50th=[ 5669], 99.90th=[ 7177], 99.95th=[ 8291],
00:20:51.420 | 99.99th=[13042]
00:20:51.420 bw ( KiB/s): min= 1560, max=101624, per=100.00%, avg=95000.21, stdev=12194.45, samples=107
00:20:51.420 iops : min= 390, max=25406, avg=23750.02, stdev=3048.61, samples=107
00:20:51.420 lat (msec) : 2=0.62%, 4=95.33%, 10=4.03%, 20=0.02%, >=2000=0.01%
00:20:51.420 cpu : usr=11.84%, sys=32.15%, ctx=110673, majf=0, minf=14
00:20:51.420 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:20:51.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:51.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:51.420 issued rwts: total=1281649,1280506,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:51.420 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:51.420 
00:20:51.420 Run status group 0 (all jobs):
00:20:51.420 READ: bw=83.4MiB/s (87.5MB/s), 83.4MiB/s-83.4MiB/s (87.5MB/s-87.5MB/s), io=5006MiB (5250MB), run=60002-60002msec
00:20:51.420 WRITE: bw=83.4MiB/s (87.4MB/s), 83.4MiB/s-83.4MiB/s (87.4MB/s-87.4MB/s), io=5002MiB (5245MB), run=60002-60002msec
00:20:51.420 
00:20:51.420 Disk stats (read/write):
00:20:51.420 ublkb1: ios=1279301/1278176, merge=0/0, ticks=3637107/3661691, in_queue=7298799, util=99.96%
00:20:51.420 15:13:44 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:20:51.420 15:13:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:51.420 15:13:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:51.420 [2024-11-20 15:13:44.478585] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:20:51.420 [2024-11-20 15:13:44.522830] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:20:51.420 [2024-11-20 
15:13:44.523280] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:20:51.420 [2024-11-20 15:13:44.530774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:51.420 [2024-11-20 15:13:44.530968] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:20:51.420 [2024-11-20 15:13:44.530994] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:20:51.420 15:13:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.420 15:13:44 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:20:51.420 15:13:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.420 15:13:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.420 [2024-11-20 15:13:44.543935] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:51.420 [2024-11-20 15:13:44.553770] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:51.420 [2024-11-20 15:13:44.553861] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:51.420 15:13:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.420 15:13:44 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:20:51.420 15:13:44 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:20:51.420 15:13:44 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76193 00:20:51.420 15:13:44 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76193 ']' 00:20:51.420 15:13:44 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76193 00:20:51.420 15:13:44 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:20:51.420 15:13:44 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.420 15:13:44 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76193 00:20:51.420 killing process with pid 76193 00:20:51.420 15:13:44 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.420 15:13:44 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.420 15:13:44 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76193' 00:20:51.420 15:13:44 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76193 00:20:51.420 15:13:44 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76193 00:20:51.420 [2024-11-20 15:13:46.400941] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:51.420 [2024-11-20 15:13:46.401038] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:51.420 00:20:51.420 real 1m6.821s 00:20:51.420 user 1m50.909s 00:20:51.420 sys 0m38.165s 00:20:51.420 15:13:48 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.420 ************************************ 00:20:51.420 END TEST ublk_recovery 00:20:51.420 ************************************ 00:20:51.420 15:13:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.420 15:13:48 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:20:51.420 15:13:48 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:20:51.420 15:13:48 -- spdk/autotest.sh@260 -- # timing_exit lib 00:20:51.420 15:13:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.420 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:20:51.420 15:13:48 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:20:51.420 15:13:48 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:20:51.420 15:13:48 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:20:51.420 15:13:48 -- 
spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:51.420 15:13:48 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:51.420 15:13:48 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:51.420 15:13:48 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:51.420 15:13:48 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:51.420 15:13:48 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:51.420 15:13:48 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:20:51.420 15:13:48 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:51.420 15:13:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:51.420 15:13:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.420 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:20:51.420 ************************************ 00:20:51.420 START TEST ftl 00:20:51.420 ************************************ 00:20:51.421 15:13:48 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:51.421 * Looking for test storage... 00:20:51.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:51.421 15:13:48 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:51.421 15:13:48 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:51.421 15:13:48 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:51.421 15:13:48 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:51.421 15:13:48 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:51.421 15:13:48 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:51.421 15:13:48 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:51.421 15:13:48 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:51.421 15:13:48 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:51.421 15:13:48 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:51.421 15:13:48 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:51.421 15:13:48 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:51.421 15:13:48 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:51.421 15:13:48 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:51.421 15:13:48 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:51.421 15:13:48 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:51.421 15:13:48 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:51.421 15:13:48 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:51.421 15:13:48 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:51.421 15:13:48 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:51.421 15:13:48 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:51.421 15:13:48 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:51.421 15:13:48 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:51.421 15:13:48 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:51.421 15:13:48 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:51.421 15:13:48 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:51.421 15:13:48 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:51.421 15:13:48 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:51.421 15:13:48 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:51.421 15:13:48 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:51.421 15:13:48 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:20:51.421 15:13:48 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:20:51.421 15:13:48 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:20:51.421 15:13:48 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:20:51.421 15:13:48 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:51.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:51.421 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:51.421 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:51.421 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:51.421 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:51.421 15:13:49 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77004 00:20:51.421 15:13:49 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:20:51.421 15:13:49 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77004 00:20:51.421 15:13:49 ftl -- common/autotest_common.sh@835 -- # '[' -z 77004 ']' 00:20:51.421 15:13:49 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.421 15:13:49 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.421 15:13:49 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.421 15:13:49 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.421 15:13:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:51.421 [2024-11-20 15:13:49.444236] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:20:51.421 [2024-11-20 15:13:49.444413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77004 ] 00:20:51.421 [2024-11-20 15:13:49.630817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.421 [2024-11-20 15:13:49.778028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.421 15:13:50 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.421 15:13:50 ftl -- common/autotest_common.sh@868 -- # return 0 00:20:51.421 15:13:50 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:20:51.421 15:13:50 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:20:51.421 15:13:51 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:20:51.421 15:13:51 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:51.680 15:13:52 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:20:51.680 15:13:52 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:20:51.680 15:13:52 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:20:51.939 15:13:52 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:20:51.939 15:13:52 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:20:51.939 15:13:52 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:20:51.939 15:13:52 ftl -- ftl/ftl.sh@50 -- # break 00:20:51.939 15:13:52 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:20:51.939 15:13:52 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:20:51.939 15:13:52 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:20:51.939 15:13:52 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:20:52.198 15:13:52 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:20:52.198 15:13:52 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:20:52.198 15:13:52 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:20:52.198 15:13:52 ftl -- ftl/ftl.sh@63 -- # break 00:20:52.198 15:13:52 ftl -- ftl/ftl.sh@66 -- # killprocess 77004 00:20:52.198 15:13:52 ftl -- common/autotest_common.sh@954 -- # '[' -z 77004 ']' 00:20:52.198 15:13:52 ftl -- common/autotest_common.sh@958 -- # kill -0 77004 00:20:52.198 15:13:52 ftl -- common/autotest_common.sh@959 -- # uname 00:20:52.198 15:13:52 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.198 15:13:52 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77004 00:20:52.198 killing process with pid 77004 00:20:52.198 15:13:52 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:52.198 15:13:52 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:52.198 15:13:52 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77004' 00:20:52.198 15:13:52 ftl -- common/autotest_common.sh@973 -- # kill 77004 00:20:52.198 15:13:52 ftl -- common/autotest_common.sh@978 -- # wait 77004 00:20:55.562 15:13:55 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:20:55.562 15:13:55 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:20:55.562 15:13:55 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:55.562 15:13:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.562 15:13:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:55.562 ************************************ 00:20:55.562 START TEST ftl_fio_basic 00:20:55.562 ************************************ 00:20:55.562 15:13:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:20:55.562 * Looking for test storage... 00:20:55.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:55.562 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:55.562 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:20:55.562 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:55.562 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:55.562 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:55.562 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.562 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:55.562 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:55.562 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:55.562 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:55.562 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:55.562 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:55.562 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:55.562 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:55.562 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77154 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77154 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77154 ']' 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.563 15:13:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:55.563 [2024-11-20 15:13:56.026070] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
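[editor's note] At this point fio.sh has pinned its knobs (FTL_BDEV_NAME=ftl0, FTL_JSON_CONF, timeout=240), installed the fio_kill trap, and launched spdk_tgt on core mask 7, i.e. cores 0-2 — which is exactly why three reactors report in below. waitforlisten then parks until pid 77154's RPC socket answers. A hedged equivalent with the wait written out as a plain poll against a real RPC (`rpc_get_methods`); the actual waitforlisten also caps itself at max_retries=100, per the trace:

    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/bin/spdk_tgt" -m 7 &            # mask 0x7 = cores 0,1,2
    svcpid=$!
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" || exit 1              # give up if the target died
        sleep 0.1
    done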
00:20:55.563 [2024-11-20 15:13:56.026242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77154 ] 00:20:55.563 [2024-11-20 15:13:56.218506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:55.563 [2024-11-20 15:13:56.375668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.563 [2024-11-20 15:13:56.375805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.563 [2024-11-20 15:13:56.375840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.943 15:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.943 15:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:20:56.943 15:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:56.943 15:13:57 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:20:56.943 15:13:57 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:56.943 15:13:57 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:20:56.943 15:13:57 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:20:56.943 15:13:57 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:57.201 15:13:57 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:57.201 15:13:57 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:20:57.201 15:13:57 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:57.201 15:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:57.201 15:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:57.201 15:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:57.201 15:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:57.201 15:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:57.460 15:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:57.460 { 00:20:57.460 "name": "nvme0n1", 00:20:57.460 "aliases": [ 00:20:57.460 "7df95094-13bc-4a27-8b36-8c7f6c49b5e8" 00:20:57.460 ], 00:20:57.460 "product_name": "NVMe disk", 00:20:57.460 "block_size": 4096, 00:20:57.460 "num_blocks": 1310720, 00:20:57.460 "uuid": "7df95094-13bc-4a27-8b36-8c7f6c49b5e8", 00:20:57.460 "numa_id": -1, 00:20:57.460 "assigned_rate_limits": { 00:20:57.460 "rw_ios_per_sec": 0, 00:20:57.460 "rw_mbytes_per_sec": 0, 00:20:57.460 "r_mbytes_per_sec": 0, 00:20:57.460 "w_mbytes_per_sec": 0 00:20:57.460 }, 00:20:57.460 "claimed": false, 00:20:57.460 "zoned": false, 00:20:57.460 "supported_io_types": { 00:20:57.460 "read": true, 00:20:57.460 "write": true, 00:20:57.460 "unmap": true, 00:20:57.460 "flush": true, 00:20:57.460 "reset": true, 00:20:57.460 "nvme_admin": true, 00:20:57.460 "nvme_io": true, 00:20:57.460 "nvme_io_md": false, 00:20:57.460 "write_zeroes": true, 00:20:57.460 "zcopy": false, 00:20:57.460 "get_zone_info": false, 00:20:57.460 "zone_management": false, 00:20:57.460 "zone_append": false, 00:20:57.460 "compare": true, 00:20:57.460 "compare_and_write": false, 00:20:57.460 "abort": true, 00:20:57.460 
"seek_hole": false, 00:20:57.460 "seek_data": false, 00:20:57.460 "copy": true, 00:20:57.460 "nvme_iov_md": false 00:20:57.460 }, 00:20:57.460 "driver_specific": { 00:20:57.460 "nvme": [ 00:20:57.460 { 00:20:57.460 "pci_address": "0000:00:11.0", 00:20:57.460 "trid": { 00:20:57.460 "trtype": "PCIe", 00:20:57.460 "traddr": "0000:00:11.0" 00:20:57.460 }, 00:20:57.460 "ctrlr_data": { 00:20:57.460 "cntlid": 0, 00:20:57.460 "vendor_id": "0x1b36", 00:20:57.460 "model_number": "QEMU NVMe Ctrl", 00:20:57.460 "serial_number": "12341", 00:20:57.460 "firmware_revision": "8.0.0", 00:20:57.460 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:57.460 "oacs": { 00:20:57.460 "security": 0, 00:20:57.460 "format": 1, 00:20:57.460 "firmware": 0, 00:20:57.460 "ns_manage": 1 00:20:57.460 }, 00:20:57.460 "multi_ctrlr": false, 00:20:57.460 "ana_reporting": false 00:20:57.460 }, 00:20:57.460 "vs": { 00:20:57.460 "nvme_version": "1.4" 00:20:57.460 }, 00:20:57.460 "ns_data": { 00:20:57.460 "id": 1, 00:20:57.460 "can_share": false 00:20:57.460 } 00:20:57.460 } 00:20:57.460 ], 00:20:57.460 "mp_policy": "active_passive" 00:20:57.460 } 00:20:57.460 } 00:20:57.460 ]' 00:20:57.460 15:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:57.460 15:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:57.460 15:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:57.460 15:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:57.460 15:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:57.460 15:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:20:57.460 15:13:58 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:20:57.460 15:13:58 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:57.460 15:13:58 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:20:57.460 15:13:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:57.460 15:13:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:57.719 15:13:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:20:57.719 15:13:58 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:57.978 15:13:58 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=4d80554a-372c-456a-941d-8023abfeae3f 00:20:57.978 15:13:58 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4d80554a-372c-456a-941d-8023abfeae3f 00:20:58.236 15:13:58 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=c8adee45-6b8a-4083-8f47-b55c6c4ad59a 00:20:58.236 15:13:58 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c8adee45-6b8a-4083-8f47-b55c6c4ad59a 00:20:58.236 15:13:58 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:20:58.236 15:13:58 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:58.236 15:13:58 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=c8adee45-6b8a-4083-8f47-b55c6c4ad59a 00:20:58.236 15:13:58 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:20:58.236 15:13:58 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size c8adee45-6b8a-4083-8f47-b55c6c4ad59a 00:20:58.236 15:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=c8adee45-6b8a-4083-8f47-b55c6c4ad59a 
00:20:58.236 15:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:58.236 15:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:58.236 15:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:58.236 15:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c8adee45-6b8a-4083-8f47-b55c6c4ad59a 00:20:58.512 15:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:58.512 { 00:20:58.512 "name": "c8adee45-6b8a-4083-8f47-b55c6c4ad59a", 00:20:58.512 "aliases": [ 00:20:58.512 "lvs/nvme0n1p0" 00:20:58.512 ], 00:20:58.512 "product_name": "Logical Volume", 00:20:58.512 "block_size": 4096, 00:20:58.512 "num_blocks": 26476544, 00:20:58.512 "uuid": "c8adee45-6b8a-4083-8f47-b55c6c4ad59a", 00:20:58.512 "assigned_rate_limits": { 00:20:58.512 "rw_ios_per_sec": 0, 00:20:58.512 "rw_mbytes_per_sec": 0, 00:20:58.512 "r_mbytes_per_sec": 0, 00:20:58.512 "w_mbytes_per_sec": 0 00:20:58.512 }, 00:20:58.512 "claimed": false, 00:20:58.512 "zoned": false, 00:20:58.512 "supported_io_types": { 00:20:58.512 "read": true, 00:20:58.512 "write": true, 00:20:58.512 "unmap": true, 00:20:58.513 "flush": false, 00:20:58.513 "reset": true, 00:20:58.513 "nvme_admin": false, 00:20:58.513 "nvme_io": false, 00:20:58.513 "nvme_io_md": false, 00:20:58.513 "write_zeroes": true, 00:20:58.513 "zcopy": false, 00:20:58.513 "get_zone_info": false, 00:20:58.513 "zone_management": false, 00:20:58.513 "zone_append": false, 00:20:58.513 "compare": false, 00:20:58.513 "compare_and_write": false, 00:20:58.513 "abort": false, 00:20:58.513 "seek_hole": true, 00:20:58.513 "seek_data": true, 00:20:58.513 "copy": false, 00:20:58.513 "nvme_iov_md": false 00:20:58.513 }, 00:20:58.513 "driver_specific": { 00:20:58.513 "lvol": { 00:20:58.513 "lvol_store_uuid": "4d80554a-372c-456a-941d-8023abfeae3f", 00:20:58.513 "base_bdev": "nvme0n1", 00:20:58.513 "thin_provision": true, 00:20:58.513 "num_allocated_clusters": 0, 00:20:58.513 "snapshot": false, 00:20:58.513 "clone": false, 00:20:58.513 "esnap_clone": false 00:20:58.513 } 00:20:58.513 } 00:20:58.513 } 00:20:58.513 ]' 00:20:58.513 15:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:58.513 15:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:58.513 15:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:58.513 15:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:58.513 15:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:58.513 15:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:58.513 15:13:59 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:20:58.513 15:13:59 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:20:58.513 15:13:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:59.083 15:13:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:59.083 15:13:59 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:59.083 15:13:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size c8adee45-6b8a-4083-8f47-b55c6c4ad59a 00:20:59.083 15:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=c8adee45-6b8a-4083-8f47-b55c6c4ad59a 00:20:59.083 15:13:59 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:59.083 15:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:59.083 15:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:59.083 15:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c8adee45-6b8a-4083-8f47-b55c6c4ad59a 00:20:59.342 15:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:59.342 { 00:20:59.342 "name": "c8adee45-6b8a-4083-8f47-b55c6c4ad59a", 00:20:59.342 "aliases": [ 00:20:59.342 "lvs/nvme0n1p0" 00:20:59.342 ], 00:20:59.342 "product_name": "Logical Volume", 00:20:59.342 "block_size": 4096, 00:20:59.342 "num_blocks": 26476544, 00:20:59.342 "uuid": "c8adee45-6b8a-4083-8f47-b55c6c4ad59a", 00:20:59.342 "assigned_rate_limits": { 00:20:59.342 "rw_ios_per_sec": 0, 00:20:59.342 "rw_mbytes_per_sec": 0, 00:20:59.342 "r_mbytes_per_sec": 0, 00:20:59.342 "w_mbytes_per_sec": 0 00:20:59.342 }, 00:20:59.342 "claimed": false, 00:20:59.342 "zoned": false, 00:20:59.342 "supported_io_types": { 00:20:59.342 "read": true, 00:20:59.342 "write": true, 00:20:59.342 "unmap": true, 00:20:59.342 "flush": false, 00:20:59.342 "reset": true, 00:20:59.342 "nvme_admin": false, 00:20:59.342 "nvme_io": false, 00:20:59.342 "nvme_io_md": false, 00:20:59.342 "write_zeroes": true, 00:20:59.342 "zcopy": false, 00:20:59.342 "get_zone_info": false, 00:20:59.342 "zone_management": false, 00:20:59.342 "zone_append": false, 00:20:59.342 "compare": false, 00:20:59.342 "compare_and_write": false, 00:20:59.342 "abort": false, 00:20:59.342 "seek_hole": true, 00:20:59.342 "seek_data": true, 00:20:59.342 "copy": false, 00:20:59.342 "nvme_iov_md": false 00:20:59.342 }, 00:20:59.342 "driver_specific": { 00:20:59.342 "lvol": { 00:20:59.342 "lvol_store_uuid": "4d80554a-372c-456a-941d-8023abfeae3f", 00:20:59.342 "base_bdev": "nvme0n1", 00:20:59.342 "thin_provision": true, 00:20:59.342 "num_allocated_clusters": 0, 00:20:59.342 "snapshot": false, 00:20:59.342 "clone": false, 00:20:59.342 "esnap_clone": false 00:20:59.342 } 00:20:59.342 } 00:20:59.342 } 00:20:59.342 ]' 00:20:59.342 15:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:59.342 15:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:59.342 15:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:59.342 15:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:59.342 15:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:59.342 15:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:59.342 15:14:00 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:20:59.342 15:14:00 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:59.601 15:14:00 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:20:59.601 15:14:00 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:20:59.601 15:14:00 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:20:59.601 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:20:59.601 15:14:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size c8adee45-6b8a-4083-8f47-b55c6c4ad59a 00:20:59.601 15:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=c8adee45-6b8a-4083-8f47-b55c6c4ad59a 00:20:59.601 15:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:59.601 15:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:59.601 15:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:59.601 15:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c8adee45-6b8a-4083-8f47-b55c6c4ad59a 00:20:59.861 15:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:59.861 { 00:20:59.861 "name": "c8adee45-6b8a-4083-8f47-b55c6c4ad59a", 00:20:59.861 "aliases": [ 00:20:59.861 "lvs/nvme0n1p0" 00:20:59.861 ], 00:20:59.861 "product_name": "Logical Volume", 00:20:59.861 "block_size": 4096, 00:20:59.861 "num_blocks": 26476544, 00:20:59.861 "uuid": "c8adee45-6b8a-4083-8f47-b55c6c4ad59a", 00:20:59.861 "assigned_rate_limits": { 00:20:59.861 "rw_ios_per_sec": 0, 00:20:59.861 "rw_mbytes_per_sec": 0, 00:20:59.861 "r_mbytes_per_sec": 0, 00:20:59.861 "w_mbytes_per_sec": 0 00:20:59.861 }, 00:20:59.861 "claimed": false, 00:20:59.861 "zoned": false, 00:20:59.861 "supported_io_types": { 00:20:59.861 "read": true, 00:20:59.861 "write": true, 00:20:59.861 "unmap": true, 00:20:59.861 "flush": false, 00:20:59.861 "reset": true, 00:20:59.861 "nvme_admin": false, 00:20:59.861 "nvme_io": false, 00:20:59.861 "nvme_io_md": false, 00:20:59.861 "write_zeroes": true, 00:20:59.861 "zcopy": false, 00:20:59.861 "get_zone_info": false, 00:20:59.861 "zone_management": false, 00:20:59.861 "zone_append": false, 00:20:59.861 "compare": false, 00:20:59.861 "compare_and_write": false, 00:20:59.861 "abort": false, 00:20:59.861 "seek_hole": true, 00:20:59.861 "seek_data": true, 00:20:59.861 "copy": false, 00:20:59.861 "nvme_iov_md": false 00:20:59.861 }, 00:20:59.861 "driver_specific": { 00:20:59.861 "lvol": { 00:20:59.862 "lvol_store_uuid": "4d80554a-372c-456a-941d-8023abfeae3f", 00:20:59.862 "base_bdev": "nvme0n1", 00:20:59.862 "thin_provision": true, 00:20:59.862 "num_allocated_clusters": 0, 00:20:59.862 "snapshot": false, 00:20:59.862 "clone": false, 00:20:59.862 "esnap_clone": false 00:20:59.862 } 00:20:59.862 } 00:20:59.862 } 00:20:59.862 ]' 00:20:59.862 15:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:59.862 15:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:59.862 15:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:59.862 15:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:59.862 15:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:59.862 15:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:59.862 15:14:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:20:59.862 15:14:00 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:20:59.862 15:14:00 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c8adee45-6b8a-4083-8f47-b55c6c4ad59a -c nvc0n1p0 --l2p_dram_limit 60 00:21:00.122 [2024-11-20 15:14:00.809169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.122 [2024-11-20 15:14:00.809248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:00.122 [2024-11-20 15:14:00.809271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:00.122 
[2024-11-20 15:14:00.809283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.122 [2024-11-20 15:14:00.809398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.122 [2024-11-20 15:14:00.809416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.122 [2024-11-20 15:14:00.809431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:00.122 [2024-11-20 15:14:00.809442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.122 [2024-11-20 15:14:00.809486] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:00.122 [2024-11-20 15:14:00.810667] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:00.122 [2024-11-20 15:14:00.810731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.122 [2024-11-20 15:14:00.810745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:00.122 [2024-11-20 15:14:00.810761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:21:00.122 [2024-11-20 15:14:00.810772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.122 [2024-11-20 15:14:00.810909] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5289d8f0-4808-4c17-8566-52f1b160ea94 00:21:00.122 [2024-11-20 15:14:00.813444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.122 [2024-11-20 15:14:00.813496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:00.122 [2024-11-20 15:14:00.813511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:00.122 [2024-11-20 15:14:00.813526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.122 [2024-11-20 15:14:00.827135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.122 [2024-11-20 15:14:00.827489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.123 [2024-11-20 15:14:00.827520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.493 ms 00:21:00.123 [2024-11-20 15:14:00.827535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.123 [2024-11-20 15:14:00.827780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.123 [2024-11-20 15:14:00.827802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.123 [2024-11-20 15:14:00.827815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:21:00.123 [2024-11-20 15:14:00.827836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.123 [2024-11-20 15:14:00.827961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.123 [2024-11-20 15:14:00.827978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:00.123 [2024-11-20 15:14:00.827990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:00.123 [2024-11-20 15:14:00.828005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.123 [2024-11-20 15:14:00.828045] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:00.123 [2024-11-20 15:14:00.834270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.123 [2024-11-20 
15:14:00.834479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.123 [2024-11-20 15:14:00.834530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.241 ms 00:21:00.123 [2024-11-20 15:14:00.834547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.123 [2024-11-20 15:14:00.834631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.123 [2024-11-20 15:14:00.834645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:00.123 [2024-11-20 15:14:00.834660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:00.123 [2024-11-20 15:14:00.834672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.123 [2024-11-20 15:14:00.834757] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:00.123 [2024-11-20 15:14:00.834941] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:00.123 [2024-11-20 15:14:00.834969] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:00.123 [2024-11-20 15:14:00.834985] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:00.123 [2024-11-20 15:14:00.835004] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:00.123 [2024-11-20 15:14:00.835018] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:00.123 [2024-11-20 15:14:00.835034] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:00.123 [2024-11-20 15:14:00.835046] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:00.123 [2024-11-20 15:14:00.835060] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:00.123 [2024-11-20 15:14:00.835072] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:00.123 [2024-11-20 15:14:00.835086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.123 [2024-11-20 15:14:00.835100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:00.123 [2024-11-20 15:14:00.835118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:21:00.123 [2024-11-20 15:14:00.835129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.123 [2024-11-20 15:14:00.835289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.123 [2024-11-20 15:14:00.835306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:00.123 [2024-11-20 15:14:00.835321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:21:00.123 [2024-11-20 15:14:00.835333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.123 [2024-11-20 15:14:00.835462] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:00.123 [2024-11-20 15:14:00.835475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:00.123 [2024-11-20 15:14:00.835494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.123 [2024-11-20 15:14:00.835506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.123 [2024-11-20 15:14:00.835522] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:21:00.123 [2024-11-20 15:14:00.835532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:00.123 [2024-11-20 15:14:00.835546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:00.123 [2024-11-20 15:14:00.835556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:00.123 [2024-11-20 15:14:00.835570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:00.123 [2024-11-20 15:14:00.835580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.123 [2024-11-20 15:14:00.835594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:00.123 [2024-11-20 15:14:00.835603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:00.123 [2024-11-20 15:14:00.835618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.123 [2024-11-20 15:14:00.835628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:00.123 [2024-11-20 15:14:00.835642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:00.123 [2024-11-20 15:14:00.835652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.123 [2024-11-20 15:14:00.835669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:00.123 [2024-11-20 15:14:00.835679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:00.123 [2024-11-20 15:14:00.835692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.123 [2024-11-20 15:14:00.835714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:00.123 [2024-11-20 15:14:00.835727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:00.123 [2024-11-20 15:14:00.835747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.123 [2024-11-20 15:14:00.835760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:00.123 [2024-11-20 15:14:00.835770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:00.123 [2024-11-20 15:14:00.835782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.123 [2024-11-20 15:14:00.835792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:00.123 [2024-11-20 15:14:00.835805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:00.123 [2024-11-20 15:14:00.835814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.123 [2024-11-20 15:14:00.835828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:00.123 [2024-11-20 15:14:00.835838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:00.123 [2024-11-20 15:14:00.835851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.123 [2024-11-20 15:14:00.835860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:00.123 [2024-11-20 15:14:00.835875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:00.123 [2024-11-20 15:14:00.835885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.123 [2024-11-20 15:14:00.835897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:00.123 [2024-11-20 15:14:00.835924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:00.123 [2024-11-20 15:14:00.835937] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.123 [2024-11-20 15:14:00.835950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:00.123 [2024-11-20 15:14:00.835962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:00.123 [2024-11-20 15:14:00.835972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.123 [2024-11-20 15:14:00.835985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:00.123 [2024-11-20 15:14:00.835994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:00.123 [2024-11-20 15:14:00.836009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.123 [2024-11-20 15:14:00.836018] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:00.123 [2024-11-20 15:14:00.836032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:00.123 [2024-11-20 15:14:00.836042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.123 [2024-11-20 15:14:00.836056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.123 [2024-11-20 15:14:00.836066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:00.123 [2024-11-20 15:14:00.836082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:00.123 [2024-11-20 15:14:00.836092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:00.123 [2024-11-20 15:14:00.836104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:00.123 [2024-11-20 15:14:00.836114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:00.123 [2024-11-20 15:14:00.836127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:00.123 [2024-11-20 15:14:00.836147] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:00.123 [2024-11-20 15:14:00.836165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.123 [2024-11-20 15:14:00.836178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:00.123 [2024-11-20 15:14:00.836192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:00.123 [2024-11-20 15:14:00.836203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:00.123 [2024-11-20 15:14:00.836217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:00.123 [2024-11-20 15:14:00.836227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:00.123 [2024-11-20 15:14:00.836241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:00.123 [2024-11-20 15:14:00.836253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:00.123 [2024-11-20 15:14:00.836266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:21:00.123 [2024-11-20 15:14:00.836276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:00.123 [2024-11-20 15:14:00.836292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:00.124 [2024-11-20 15:14:00.836302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:00.124 [2024-11-20 15:14:00.836318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:00.124 [2024-11-20 15:14:00.836328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:00.124 [2024-11-20 15:14:00.836342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:00.124 [2024-11-20 15:14:00.836354] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:00.124 [2024-11-20 15:14:00.836370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.124 [2024-11-20 15:14:00.836384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:00.124 [2024-11-20 15:14:00.836398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:00.124 [2024-11-20 15:14:00.836408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:00.124 [2024-11-20 15:14:00.836422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:00.124 [2024-11-20 15:14:00.836433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.124 [2024-11-20 15:14:00.836447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:00.124 [2024-11-20 15:14:00.836459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:21:00.124 [2024-11-20 15:14:00.836473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.124 [2024-11-20 15:14:00.836559] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
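[editor's note] Two things in the stretch above deserve a flag. First, the `line 52: [: -eq: unary operator expected` complaint from fio.sh is the classic empty-operand bug: whatever variable sits left of -eq (its name is not visible from the trace alone) expanded to nothing, so `[` saw just `-eq 1`, printed the error, returned false, and execution continued. A `${var:-0}` default, or bash's `[[ ]]`, is the usual guard. Second, the FTL bdev is glued together from the thin lvol (data side, behind 0000:00:11.0) and a 5171 MiB split of the second controller (NV-cache side, 0000:00:10.0), with the L2P table capped at 60 MiB of DRAM — the cap the `l2p maximum resident size is: 59 (of 60) MiB` notice further down is measured against. A hedged replay (rpc calls verbatim from the trace; the guard line is illustrative, with a made-up variable name):

    [ "${some_flag:-0}" -eq 1 ] && echo flag   # guarded form of the failing line-52 test
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                            # -> nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 \
        -d c8adee45-6b8a-4083-8f47-b55c6c4ad59a \
        -c nvc0n1p0 --l2p_dram_limit 60   # ~5.7 s here, mostly NV-cache scrubbing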
00:21:00.124 [2024-11-20 15:14:00.836586] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:05.428 [2024-11-20 15:14:05.859754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.428 [2024-11-20 15:14:05.859859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:05.428 [2024-11-20 15:14:05.859881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5031.349 ms 00:21:05.428 [2024-11-20 15:14:05.859897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.428 [2024-11-20 15:14:05.909048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.428 [2024-11-20 15:14:05.909138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:05.428 [2024-11-20 15:14:05.909158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.795 ms 00:21:05.428 [2024-11-20 15:14:05.909173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.428 [2024-11-20 15:14:05.909418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.428 [2024-11-20 15:14:05.909436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:05.428 [2024-11-20 15:14:05.909450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:05.428 [2024-11-20 15:14:05.909468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.428 [2024-11-20 15:14:05.980469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.428 [2024-11-20 15:14:05.980552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:05.428 [2024-11-20 15:14:05.980578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.054 ms 00:21:05.428 [2024-11-20 15:14:05.980594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.428 [2024-11-20 15:14:05.980673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.428 [2024-11-20 15:14:05.980688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:05.428 [2024-11-20 15:14:05.980701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:05.428 [2024-11-20 15:14:05.980715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.428 [2024-11-20 15:14:05.981651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.428 [2024-11-20 15:14:05.981680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:05.428 [2024-11-20 15:14:05.981692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:21:05.428 [2024-11-20 15:14:05.981711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.428 [2024-11-20 15:14:05.981884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.428 [2024-11-20 15:14:05.981904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:05.428 [2024-11-20 15:14:05.981916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:21:05.428 [2024-11-20 15:14:05.981934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.428 [2024-11-20 15:14:06.008253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.428 [2024-11-20 15:14:06.008338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:05.428 [2024-11-20 
15:14:06.008357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.323 ms 00:21:05.428 [2024-11-20 15:14:06.008382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.428 [2024-11-20 15:14:06.026458] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:05.428 [2024-11-20 15:14:06.055508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.428 [2024-11-20 15:14:06.055630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:05.428 [2024-11-20 15:14:06.055655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.995 ms 00:21:05.428 [2024-11-20 15:14:06.055672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.428 [2024-11-20 15:14:06.155010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.428 [2024-11-20 15:14:06.155088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:05.428 [2024-11-20 15:14:06.155118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.394 ms 00:21:05.428 [2024-11-20 15:14:06.155131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.428 [2024-11-20 15:14:06.155404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.428 [2024-11-20 15:14:06.155421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:05.428 [2024-11-20 15:14:06.155443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:21:05.428 [2024-11-20 15:14:06.155454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.428 [2024-11-20 15:14:06.202266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.428 [2024-11-20 15:14:06.202358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:05.428 [2024-11-20 15:14:06.202382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.771 ms 00:21:05.428 [2024-11-20 15:14:06.202394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.428 [2024-11-20 15:14:06.247954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.428 [2024-11-20 15:14:06.248040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:05.428 [2024-11-20 15:14:06.248067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.530 ms 00:21:05.428 [2024-11-20 15:14:06.248079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.428 [2024-11-20 15:14:06.249018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.428 [2024-11-20 15:14:06.249051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:05.428 [2024-11-20 15:14:06.249068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:21:05.428 [2024-11-20 15:14:06.249080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.688 [2024-11-20 15:14:06.371727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.688 [2024-11-20 15:14:06.371822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:05.688 [2024-11-20 15:14:06.371851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 122.700 ms 00:21:05.688 [2024-11-20 15:14:06.371868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.688 [2024-11-20 
15:14:06.416855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.688 [2024-11-20 15:14:06.416950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:05.688 [2024-11-20 15:14:06.416973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.877 ms 00:21:05.688 [2024-11-20 15:14:06.416985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.688 [2024-11-20 15:14:06.460822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.688 [2024-11-20 15:14:06.460916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:05.689 [2024-11-20 15:14:06.460939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.800 ms 00:21:05.689 [2024-11-20 15:14:06.460950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.689 [2024-11-20 15:14:06.503207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.689 [2024-11-20 15:14:06.503479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:05.689 [2024-11-20 15:14:06.503518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.229 ms 00:21:05.689 [2024-11-20 15:14:06.503530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.689 [2024-11-20 15:14:06.503627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.689 [2024-11-20 15:14:06.503641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:05.689 [2024-11-20 15:14:06.503667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:05.689 [2024-11-20 15:14:06.503679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.689 [2024-11-20 15:14:06.503947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.689 [2024-11-20 15:14:06.503969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:05.689 [2024-11-20 15:14:06.503985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:21:05.689 [2024-11-20 15:14:06.503997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.689 [2024-11-20 15:14:06.505653] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5705.161 ms, result 0 00:21:05.689 { 00:21:05.689 "name": "ftl0", 00:21:05.689 "uuid": "5289d8f0-4808-4c17-8566-52f1b160ea94" 00:21:05.689 } 00:21:05.948 15:14:06 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:21:05.948 15:14:06 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:05.948 15:14:06 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:05.948 15:14:06 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:21:05.948 15:14:06 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:05.948 15:14:06 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:05.948 15:14:06 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:05.948 15:14:06 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:06.206 [ 00:21:06.206 { 00:21:06.206 "name": "ftl0", 00:21:06.206 "aliases": [ 00:21:06.206 "5289d8f0-4808-4c17-8566-52f1b160ea94" 00:21:06.206 ], 00:21:06.206 "product_name": "FTL 
disk", 00:21:06.206 "block_size": 4096, 00:21:06.206 "num_blocks": 20971520, 00:21:06.206 "uuid": "5289d8f0-4808-4c17-8566-52f1b160ea94", 00:21:06.206 "assigned_rate_limits": { 00:21:06.206 "rw_ios_per_sec": 0, 00:21:06.206 "rw_mbytes_per_sec": 0, 00:21:06.206 "r_mbytes_per_sec": 0, 00:21:06.206 "w_mbytes_per_sec": 0 00:21:06.206 }, 00:21:06.206 "claimed": false, 00:21:06.206 "zoned": false, 00:21:06.206 "supported_io_types": { 00:21:06.206 "read": true, 00:21:06.206 "write": true, 00:21:06.206 "unmap": true, 00:21:06.206 "flush": true, 00:21:06.206 "reset": false, 00:21:06.206 "nvme_admin": false, 00:21:06.206 "nvme_io": false, 00:21:06.206 "nvme_io_md": false, 00:21:06.206 "write_zeroes": true, 00:21:06.206 "zcopy": false, 00:21:06.206 "get_zone_info": false, 00:21:06.206 "zone_management": false, 00:21:06.206 "zone_append": false, 00:21:06.206 "compare": false, 00:21:06.206 "compare_and_write": false, 00:21:06.206 "abort": false, 00:21:06.206 "seek_hole": false, 00:21:06.206 "seek_data": false, 00:21:06.206 "copy": false, 00:21:06.206 "nvme_iov_md": false 00:21:06.206 }, 00:21:06.206 "driver_specific": { 00:21:06.206 "ftl": { 00:21:06.206 "base_bdev": "c8adee45-6b8a-4083-8f47-b55c6c4ad59a", 00:21:06.206 "cache": "nvc0n1p0" 00:21:06.206 } 00:21:06.206 } 00:21:06.206 } 00:21:06.206 ] 00:21:06.206 15:14:06 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:21:06.206 15:14:06 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:21:06.206 15:14:07 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:06.465 15:14:07 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:21:06.465 15:14:07 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:06.724 [2024-11-20 15:14:07.444361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.724 [2024-11-20 15:14:07.444442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:06.724 [2024-11-20 15:14:07.444462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:06.724 [2024-11-20 15:14:07.444477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.724 [2024-11-20 15:14:07.444520] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:06.724 [2024-11-20 15:14:07.449322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.724 [2024-11-20 15:14:07.449370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:06.724 [2024-11-20 15:14:07.449392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.781 ms 00:21:06.724 [2024-11-20 15:14:07.449404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.724 [2024-11-20 15:14:07.449947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.724 [2024-11-20 15:14:07.449973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:06.724 [2024-11-20 15:14:07.449990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:21:06.724 [2024-11-20 15:14:07.450000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.724 [2024-11-20 15:14:07.452597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.724 [2024-11-20 15:14:07.452790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:06.724 
[2024-11-20 15:14:07.452822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.567 ms 00:21:06.724 [2024-11-20 15:14:07.452835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.724 [2024-11-20 15:14:07.458057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.724 [2024-11-20 15:14:07.458096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:06.724 [2024-11-20 15:14:07.458113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.176 ms 00:21:06.724 [2024-11-20 15:14:07.458124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.724 [2024-11-20 15:14:07.502110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.724 [2024-11-20 15:14:07.502206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:06.724 [2024-11-20 15:14:07.502228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.911 ms 00:21:06.724 [2024-11-20 15:14:07.502240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.725 [2024-11-20 15:14:07.529348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.725 [2024-11-20 15:14:07.529422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:06.725 [2024-11-20 15:14:07.529451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.015 ms 00:21:06.725 [2024-11-20 15:14:07.529463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.725 [2024-11-20 15:14:07.529803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.725 [2024-11-20 15:14:07.529822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:06.725 [2024-11-20 15:14:07.529839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:21:06.725 [2024-11-20 15:14:07.529850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.985 [2024-11-20 15:14:07.576516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.985 [2024-11-20 15:14:07.576617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:06.985 [2024-11-20 15:14:07.576640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.700 ms 00:21:06.985 [2024-11-20 15:14:07.576652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.985 [2024-11-20 15:14:07.620780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.985 [2024-11-20 15:14:07.621166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:06.985 [2024-11-20 15:14:07.621206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.063 ms 00:21:06.985 [2024-11-20 15:14:07.621220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.985 [2024-11-20 15:14:07.666152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.985 [2024-11-20 15:14:07.666249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:06.985 [2024-11-20 15:14:07.666274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.891 ms 00:21:06.985 [2024-11-20 15:14:07.666286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.985 [2024-11-20 15:14:07.712245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.985 [2024-11-20 15:14:07.712583] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:06.985 [2024-11-20 15:14:07.712623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.779 ms 00:21:06.985 [2024-11-20 15:14:07.712636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.985 [2024-11-20 15:14:07.712764] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:06.985 [2024-11-20 15:14:07.712789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:06.985 [2024-11-20 15:14:07.712808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:06.985 [2024-11-20 15:14:07.712822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:06.985 [2024-11-20 15:14:07.712838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:06.985 [2024-11-20 15:14:07.712851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:06.985 [2024-11-20 15:14:07.712868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:06.985 [2024-11-20 15:14:07.712881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:06.985 [2024-11-20 15:14:07.712903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:06.985 [2024-11-20 15:14:07.712916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:06.985 [2024-11-20 15:14:07.712932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:06.985 [2024-11-20 15:14:07.712945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:06.985 [2024-11-20 15:14:07.712962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:06.985 [2024-11-20 15:14:07.712974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:06.985 [2024-11-20 15:14:07.712989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:06.985 [2024-11-20 15:14:07.713008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:06.985 [2024-11-20 15:14:07.713032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:06.985 [2024-11-20 15:14:07.713051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 
[2024-11-20 15:14:07.713163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:21:06.986 [2024-11-20 15:14:07.713586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.713996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:06.986 [2024-11-20 15:14:07.714357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:06.987 [2024-11-20 15:14:07.714373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:06.987 [2024-11-20 15:14:07.714385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:06.987 [2024-11-20 15:14:07.714400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:06.987 [2024-11-20 15:14:07.714417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:06.987 [2024-11-20 15:14:07.714434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:06.987 [2024-11-20 15:14:07.714447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:06.987 [2024-11-20 15:14:07.714463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:06.987 [2024-11-20 15:14:07.714475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:06.987 [2024-11-20 15:14:07.714493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:06.987 [2024-11-20 15:14:07.714523] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:06.987 [2024-11-20 15:14:07.714540] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5289d8f0-4808-4c17-8566-52f1b160ea94 00:21:06.987 [2024-11-20 15:14:07.714554] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:06.987 [2024-11-20 15:14:07.714572] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:06.987 [2024-11-20 15:14:07.714584] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:06.987 [2024-11-20 15:14:07.714606] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:06.987 [2024-11-20 15:14:07.714617] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:06.987 [2024-11-20 15:14:07.714645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:06.987 [2024-11-20 15:14:07.714656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:06.987 [2024-11-20 15:14:07.714669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:06.987 [2024-11-20 15:14:07.714679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:06.987 [2024-11-20 15:14:07.714695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.987 [2024-11-20 15:14:07.714707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:06.987 [2024-11-20 15:14:07.714723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.937 ms 00:21:06.987 [2024-11-20 15:14:07.714745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.987 [2024-11-20 15:14:07.739090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.987 [2024-11-20 15:14:07.739406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:06.987 [2024-11-20 15:14:07.739443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.201 ms 00:21:06.987 [2024-11-20 15:14:07.739457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.987 [2024-11-20 15:14:07.740154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.987 [2024-11-20 15:14:07.740172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:06.987 [2024-11-20 15:14:07.740188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:21:06.987 [2024-11-20 15:14:07.740199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.246 [2024-11-20 15:14:07.822450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.246 [2024-11-20 15:14:07.822551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:07.246 [2024-11-20 15:14:07.822583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.246 [2024-11-20 15:14:07.822603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
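The management trace running through this stretch of the log is the FTL shutdown sequence kicked off by two RPC calls recorded above: ftl/fio.sh first snapshots the live bdev configuration (wrapping save_subsystem_config -n bdev in a '{"subsystems": [...]}' envelope), then calls bdev_ftl_unload -b ftl0. The unload persists the L2P, NV cache, valid map, P2L, band, and trim metadata, sets the FTL clean state, dumps band validity and statistics (here all 100 bands are free with wr_cnt 0, and WAF is inf because user writes are 0), then rolls the initialization steps back; the finish message further down reports the whole teardown at roughly 653 ms. A standalone sketch of that teardown follows; the RPC calls are quoted from the trace, while redirecting the envelope into the ftl.json file that the script removes at the end of the test is an assumption:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    {
      echo '{"subsystems": ['              # open the envelope fio.sh builds
      $RPC save_subsystem_config -n bdev   # dump the live bdev subsystem config
      echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # assumed target

    # Unloading the FTL bdev drives the traced sequence: persist metadata,
    # mark the device clean, dump bands and stats, roll back initialization.
    $RPC bdev_ftl_unload -b ftl0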
00:21:07.246 [2024-11-20 15:14:07.822754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.246 [2024-11-20 15:14:07.822770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:07.246 [2024-11-20 15:14:07.822786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.246 [2024-11-20 15:14:07.822810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.246 [2024-11-20 15:14:07.823042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.246 [2024-11-20 15:14:07.823064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:07.246 [2024-11-20 15:14:07.823081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.246 [2024-11-20 15:14:07.823093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.246 [2024-11-20 15:14:07.823153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.246 [2024-11-20 15:14:07.823169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:07.246 [2024-11-20 15:14:07.823185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.246 [2024-11-20 15:14:07.823196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.246 [2024-11-20 15:14:07.980181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.246 [2024-11-20 15:14:07.980280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:07.246 [2024-11-20 15:14:07.980305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.246 [2024-11-20 15:14:07.980318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.508 [2024-11-20 15:14:08.095456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.508 [2024-11-20 15:14:08.095551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:07.508 [2024-11-20 15:14:08.095572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.508 [2024-11-20 15:14:08.095584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.508 [2024-11-20 15:14:08.095777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.508 [2024-11-20 15:14:08.095793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:07.508 [2024-11-20 15:14:08.095814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.508 [2024-11-20 15:14:08.095825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.508 [2024-11-20 15:14:08.095921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.508 [2024-11-20 15:14:08.095935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:07.508 [2024-11-20 15:14:08.095950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.508 [2024-11-20 15:14:08.095961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.508 [2024-11-20 15:14:08.096102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.508 [2024-11-20 15:14:08.096116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:07.508 [2024-11-20 15:14:08.096131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.508 [2024-11-20 
15:14:08.096145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.508 [2024-11-20 15:14:08.096212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.508 [2024-11-20 15:14:08.096226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:07.508 [2024-11-20 15:14:08.096240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.508 [2024-11-20 15:14:08.096251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.508 [2024-11-20 15:14:08.096312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.508 [2024-11-20 15:14:08.096324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:07.508 [2024-11-20 15:14:08.096338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.508 [2024-11-20 15:14:08.096349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.508 [2024-11-20 15:14:08.096426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.508 [2024-11-20 15:14:08.096439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:07.508 [2024-11-20 15:14:08.096456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.508 [2024-11-20 15:14:08.096466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.508 [2024-11-20 15:14:08.096667] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 653.332 ms, result 0 00:21:07.508 true 00:21:07.508 15:14:08 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77154 00:21:07.508 15:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77154 ']' 00:21:07.508 15:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77154 00:21:07.508 15:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:21:07.508 15:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.508 15:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77154 00:21:07.508 killing process with pid 77154 00:21:07.508 15:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:07.508 15:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:07.508 15:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77154' 00:21:07.508 15:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77154 00:21:07.508 15:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77154 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:12.881 15:14:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:12.882 15:14:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:12.882 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:21:12.882 fio-3.35 00:21:12.882 Starting 1 thread 00:21:18.155 00:21:18.155 test: (groupid=0, jobs=1): err= 0: pid=77396: Wed Nov 20 15:14:18 2024 00:21:18.155 read: IOPS=970, BW=64.4MiB/s (67.6MB/s)(255MiB/3951msec) 00:21:18.155 slat (nsec): min=4801, max=35154, avg=7338.08, stdev=3417.04 00:21:18.155 clat (usec): min=292, max=1036, avg=456.70, stdev=65.25 00:21:18.155 lat (usec): min=308, max=1053, avg=464.04, stdev=65.62 00:21:18.155 clat percentiles (usec): 00:21:18.155 | 1.00th=[ 330], 5.00th=[ 351], 10.00th=[ 396], 20.00th=[ 408], 00:21:18.155 | 30.00th=[ 420], 40.00th=[ 429], 50.00th=[ 449], 60.00th=[ 474], 00:21:18.155 | 70.00th=[ 486], 80.00th=[ 502], 90.00th=[ 537], 95.00th=[ 562], 00:21:18.155 | 99.00th=[ 635], 99.50th=[ 693], 99.90th=[ 889], 99.95th=[ 914], 00:21:18.155 | 99.99th=[ 1037] 00:21:18.155 write: IOPS=976, BW=64.9MiB/s (68.0MB/s)(256MiB/3947msec); 0 zone resets 00:21:18.155 slat (nsec): min=15820, max=98965, avg=21168.69, stdev=6347.27 00:21:18.155 clat (usec): min=350, max=60435, avg=531.03, stdev=967.77 00:21:18.155 lat (usec): min=368, max=60454, avg=552.19, stdev=967.77 00:21:18.155 clat percentiles (usec): 00:21:18.155 | 1.00th=[ 408], 5.00th=[ 420], 10.00th=[ 433], 20.00th=[ 445], 00:21:18.155 | 30.00th=[ 469], 40.00th=[ 498], 50.00th=[ 510], 60.00th=[ 523], 00:21:18.155 | 70.00th=[ 545], 80.00th=[ 570], 90.00th=[ 603], 95.00th=[ 635], 00:21:18.155 | 99.00th=[ 766], 99.50th=[ 832], 99.90th=[ 938], 99.95th=[ 1004], 00:21:18.155 | 99.99th=[60556] 00:21:18.155 bw ( KiB/s): min=57120, max=70312, per=100.00%, avg=66562.29, stdev=4460.34, samples=7 00:21:18.155 iops : min= 840, max= 1034, avg=978.86, stdev=65.59, samples=7 00:21:18.155 lat (usec) : 500=60.81%, 750=38.39%, 1000=0.75% 
00:21:18.155 lat (msec) : 2=0.03%, 100=0.01% 00:21:18.155 cpu : usr=98.46%, sys=0.51%, ctx=10, majf=0, minf=1170 00:21:18.155 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:18.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.155 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.155 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:18.155 00:21:18.155 Run status group 0 (all jobs): 00:21:18.155 READ: bw=64.4MiB/s (67.6MB/s), 64.4MiB/s-64.4MiB/s (67.6MB/s-67.6MB/s), io=255MiB (267MB), run=3951-3951msec 00:21:18.155 WRITE: bw=64.9MiB/s (68.0MB/s), 64.9MiB/s-64.9MiB/s (68.0MB/s-68.0MB/s), io=256MiB (269MB), run=3947-3947msec 00:21:20.707 ----------------------------------------------------- 00:21:20.707 Suppressions used: 00:21:20.707 count bytes template 00:21:20.707 1 5 /usr/src/fio/parse.c 00:21:20.707 1 8 libtcmalloc_minimal.so 00:21:20.707 1 904 libcrypto.so 00:21:20.707 ----------------------------------------------------- 00:21:20.707 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:20.707 15:14:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:20.707 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:20.707 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:20.707 fio-3.35 00:21:20.707 Starting 2 threads 00:21:52.789 00:21:52.789 first_half: (groupid=0, jobs=1): err= 0: pid=77509: Wed Nov 20 15:14:48 2024 00:21:52.789 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(255MiB/25635msec) 00:21:52.789 slat (nsec): min=3638, max=85682, avg=6687.97, stdev=2099.38 00:21:52.789 clat (usec): min=1046, max=297695, avg=39479.42, stdev=21149.85 00:21:52.789 lat (usec): min=1054, max=297701, avg=39486.11, stdev=21150.10 00:21:52.789 clat percentiles (msec): 00:21:52.789 | 1.00th=[ 16], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:21:52.789 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:21:52.789 | 70.00th=[ 36], 80.00th=[ 40], 90.00th=[ 46], 95.00th=[ 66], 00:21:52.789 | 99.00th=[ 165], 99.50th=[ 182], 99.90th=[ 211], 99.95th=[ 226], 00:21:52.789 | 99.99th=[ 288] 00:21:52.789 write: IOPS=3040, BW=11.9MiB/s (12.5MB/s)(256MiB/21553msec); 0 zone resets 00:21:52.789 slat (usec): min=4, max=696, avg= 8.73, stdev= 7.77 00:21:52.789 clat (usec): min=434, max=116632, avg=10698.25, stdev=17620.49 00:21:52.789 lat (usec): min=443, max=116640, avg=10706.98, stdev=17620.62 00:21:52.790 clat percentiles (usec): 00:21:52.790 | 1.00th=[ 1090], 5.00th=[ 1418], 10.00th=[ 1663], 20.00th=[ 2147], 00:21:52.790 | 30.00th=[ 3654], 40.00th=[ 5276], 50.00th=[ 6259], 60.00th=[ 7242], 00:21:52.790 | 70.00th=[ 8455], 80.00th=[ 11338], 90.00th=[ 14484], 95.00th=[ 45876], 00:21:52.790 | 99.00th=[ 89654], 99.50th=[101188], 99.90th=[110625], 99.95th=[112722], 00:21:52.790 | 99.99th=[114820] 00:21:52.790 bw ( KiB/s): min= 1008, max=42112, per=97.92%, avg=21841.08, stdev=13859.71, samples=24 00:21:52.790 iops : min= 252, max=10528, avg=5460.25, stdev=3464.90, samples=24 00:21:52.790 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.23% 00:21:52.790 lat (msec) : 2=8.62%, 4=7.30%, 10=22.22%, 20=8.57%, 50=46.28% 00:21:52.790 lat (msec) : 100=5.11%, 250=1.57%, 500=0.02% 00:21:52.790 cpu : usr=99.20%, sys=0.23%, ctx=42, majf=0, minf=5575 00:21:52.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:52.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.790 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:52.790 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:52.790 second_half: (groupid=0, jobs=1): err= 0: pid=77510: Wed Nov 20 15:14:48 2024 00:21:52.790 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(255MiB/25822msec) 00:21:52.790 slat (usec): min=3, max=113, avg= 6.72, stdev= 2.08 00:21:52.790 clat (usec): min=922, max=309519, avg=38769.51, stdev=23466.69 00:21:52.790 lat (usec): min=931, max=309527, avg=38776.24, stdev=23466.96 00:21:52.790 clat percentiles (msec): 00:21:52.790 | 1.00th=[ 9], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:21:52.790 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:21:52.790 | 70.00th=[ 36], 80.00th=[ 39], 
90.00th=[ 43], 95.00th=[ 57], 00:21:52.790 | 99.00th=[ 176], 99.50th=[ 190], 99.90th=[ 224], 99.95th=[ 236], 00:21:52.790 | 99.99th=[ 305] 00:21:52.790 write: IOPS=2788, BW=10.9MiB/s (11.4MB/s)(256MiB/23505msec); 0 zone resets 00:21:52.790 slat (usec): min=4, max=388, avg= 8.95, stdev= 5.13 00:21:52.790 clat (usec): min=423, max=116175, avg=11769.03, stdev=19136.99 00:21:52.790 lat (usec): min=438, max=116186, avg=11777.97, stdev=19137.16 00:21:52.790 clat percentiles (usec): 00:21:52.790 | 1.00th=[ 1037], 5.00th=[ 1352], 10.00th=[ 1582], 20.00th=[ 1926], 00:21:52.790 | 30.00th=[ 2376], 40.00th=[ 4080], 50.00th=[ 5866], 60.00th=[ 7177], 00:21:52.790 | 70.00th=[ 8717], 80.00th=[ 12649], 90.00th=[ 31851], 95.00th=[ 57410], 00:21:52.790 | 99.00th=[ 91751], 99.50th=[101188], 99.90th=[110625], 99.95th=[112722], 00:21:52.790 | 99.99th=[114820] 00:21:52.790 bw ( KiB/s): min= 1368, max=49944, per=90.41%, avg=20166.73, stdev=14784.80, samples=26 00:21:52.790 iops : min= 342, max=12486, avg=5041.65, stdev=3696.22, samples=26 00:21:52.790 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.33% 00:21:52.790 lat (msec) : 2=10.89%, 4=8.55%, 10=18.51%, 20=7.74%, 50=48.15% 00:21:52.790 lat (msec) : 100=3.84%, 250=1.92%, 500=0.01% 00:21:52.790 cpu : usr=99.18%, sys=0.23%, ctx=64, majf=0, minf=5532 00:21:52.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:52.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.790 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:52.790 issued rwts: total=65317,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:52.790 00:21:52.790 Run status group 0 (all jobs): 00:21:52.790 READ: bw=19.8MiB/s (20.7MB/s), 9.88MiB/s-9.95MiB/s (10.4MB/s-10.4MB/s), io=510MiB (535MB), run=25635-25822msec 00:21:52.790 WRITE: bw=21.8MiB/s (22.8MB/s), 10.9MiB/s-11.9MiB/s (11.4MB/s-12.5MB/s), io=512MiB (537MB), run=21553-23505msec 00:21:52.790 ----------------------------------------------------- 00:21:52.790 Suppressions used: 00:21:52.790 count bytes template 00:21:52.790 2 10 /usr/src/fio/parse.c 00:21:52.790 4 384 /usr/src/fio/iolog.c 00:21:52.790 1 8 libtcmalloc_minimal.so 00:21:52.790 1 904 libcrypto.so 00:21:52.790 ----------------------------------------------------- 00:21:52.790 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:52.790 15:14:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:52.790 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:52.790 fio-3.35 00:21:52.790 Starting 1 thread 00:22:07.731 00:22:07.731 test: (groupid=0, jobs=1): err= 0: pid=77851: Wed Nov 20 15:15:08 2024 00:22:07.731 read: IOPS=7188, BW=28.1MiB/s (29.4MB/s)(255MiB/9070msec) 00:22:07.731 slat (nsec): min=3763, max=35894, avg=6069.52, stdev=1818.90 00:22:07.731 clat (usec): min=746, max=37763, avg=17794.52, stdev=1663.92 00:22:07.731 lat (usec): min=753, max=37768, avg=17800.59, stdev=1664.03 00:22:07.731 clat percentiles (usec): 00:22:07.731 | 1.00th=[16188], 5.00th=[16581], 10.00th=[16712], 20.00th=[16909], 00:22:07.731 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:22:07.731 | 70.00th=[17957], 80.00th=[18220], 90.00th=[19006], 95.00th=[19792], 00:22:07.731 | 99.00th=[25560], 99.50th=[28443], 99.90th=[32375], 99.95th=[33424], 00:22:07.731 | 99.99th=[36963] 00:22:07.731 write: IOPS=11.5k, BW=44.9MiB/s (47.1MB/s)(256MiB/5701msec); 0 zone resets 00:22:07.731 slat (usec): min=4, max=640, avg= 9.95, stdev= 7.03 00:22:07.731 clat (usec): min=611, max=60713, avg=11082.53, stdev=13610.06 00:22:07.731 lat (usec): min=618, max=60720, avg=11092.49, stdev=13610.04 00:22:07.731 clat percentiles (usec): 00:22:07.731 | 1.00th=[ 996], 5.00th=[ 1205], 10.00th=[ 1369], 20.00th=[ 1598], 00:22:07.731 | 30.00th=[ 1811], 40.00th=[ 2540], 50.00th=[ 7242], 60.00th=[ 8455], 00:22:07.731 | 70.00th=[ 9896], 80.00th=[12518], 90.00th=[38011], 95.00th=[41157], 00:22:07.731 | 99.00th=[52167], 99.50th=[54264], 99.90th=[56886], 99.95th=[59507], 00:22:07.731 | 99.99th=[60556] 00:22:07.731 bw ( KiB/s): min=15872, max=69272, per=95.02%, avg=43690.67, stdev=12083.02, samples=12 00:22:07.731 iops : min= 3968, max=17318, avg=10922.67, stdev=3020.75, samples=12 00:22:07.731 lat (usec) : 750=0.02%, 1000=0.49% 00:22:07.731 lat (msec) : 2=17.17%, 4=3.28%, 10=14.67%, 20=53.77%, 50=9.81% 00:22:07.731 lat (msec) : 100=0.78% 
00:22:07.731 cpu : usr=98.91%, sys=0.29%, ctx=24, majf=0, minf=5565 00:22:07.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:07.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.731 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:07.731 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:07.731 00:22:07.731 Run status group 0 (all jobs): 00:22:07.731 READ: bw=28.1MiB/s (29.4MB/s), 28.1MiB/s-28.1MiB/s (29.4MB/s-29.4MB/s), io=255MiB (267MB), run=9070-9070msec 00:22:07.731 WRITE: bw=44.9MiB/s (47.1MB/s), 44.9MiB/s-44.9MiB/s (47.1MB/s-47.1MB/s), io=256MiB (268MB), run=5701-5701msec 00:22:10.258 ----------------------------------------------------- 00:22:10.258 Suppressions used: 00:22:10.258 count bytes template 00:22:10.258 1 5 /usr/src/fio/parse.c 00:22:10.258 2 192 /usr/src/fio/iolog.c 00:22:10.258 1 8 libtcmalloc_minimal.so 00:22:10.258 1 904 libcrypto.so 00:22:10.258 ----------------------------------------------------- 00:22:10.258 00:22:10.258 15:15:10 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:22:10.258 15:15:10 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:10.258 15:15:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:10.258 15:15:10 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:10.258 15:15:10 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:22:10.258 Remove shared memory files 00:22:10.258 15:15:10 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:10.258 15:15:10 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:22:10.258 15:15:10 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:22:10.258 15:15:10 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57747 /dev/shm/spdk_tgt_trace.pid76044 00:22:10.258 15:15:10 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:10.258 15:15:10 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:22:10.258 00:22:10.258 real 1m15.259s 00:22:10.258 user 2m41.895s 00:22:10.258 sys 0m5.077s 00:22:10.258 15:15:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:10.258 15:15:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:10.258 ************************************ 00:22:10.258 END TEST ftl_fio_basic 00:22:10.258 ************************************ 00:22:10.258 15:15:10 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:10.259 15:15:10 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:10.259 15:15:10 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:10.259 15:15:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:10.259 ************************************ 00:22:10.259 START TEST ftl_bdevperf 00:22:10.259 ************************************ 00:22:10.259 15:15:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:10.518 * Looking for test storage... 
00:22:10.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:10.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.518 --rc genhtml_branch_coverage=1 00:22:10.518 --rc genhtml_function_coverage=1 00:22:10.518 --rc genhtml_legend=1 00:22:10.518 --rc geninfo_all_blocks=1 00:22:10.518 --rc geninfo_unexecuted_blocks=1 00:22:10.518 00:22:10.518 ' 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:10.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.518 --rc genhtml_branch_coverage=1 00:22:10.518 
--rc genhtml_function_coverage=1 00:22:10.518 --rc genhtml_legend=1 00:22:10.518 --rc geninfo_all_blocks=1 00:22:10.518 --rc geninfo_unexecuted_blocks=1 00:22:10.518 00:22:10.518 ' 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:10.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.518 --rc genhtml_branch_coverage=1 00:22:10.518 --rc genhtml_function_coverage=1 00:22:10.518 --rc genhtml_legend=1 00:22:10.518 --rc geninfo_all_blocks=1 00:22:10.518 --rc geninfo_unexecuted_blocks=1 00:22:10.518 00:22:10.518 ' 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:10.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.518 --rc genhtml_branch_coverage=1 00:22:10.518 --rc genhtml_function_coverage=1 00:22:10.518 --rc genhtml_legend=1 00:22:10.518 --rc geninfo_all_blocks=1 00:22:10.518 --rc geninfo_unexecuted_blocks=1 00:22:10.518 00:22:10.518 ' 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:10.518 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78101 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78101 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78101 ']' 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.519 15:15:11 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:10.792 [2024-11-20 15:15:11.358650] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:22:10.792 [2024-11-20 15:15:11.359453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78101 ] 00:22:10.792 [2024-11-20 15:15:11.577187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.050 [2024-11-20 15:15:11.727563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.616 15:15:12 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.616 15:15:12 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:22:11.616 15:15:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:11.616 15:15:12 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:22:11.616 15:15:12 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:11.616 15:15:12 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:22:11.616 15:15:12 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:22:11.616 15:15:12 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:11.874 15:15:12 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:11.874 15:15:12 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:22:11.874 15:15:12 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:11.874 15:15:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:11.874 15:15:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:11.874 15:15:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:11.874 15:15:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:11.874 15:15:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:12.132 15:15:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:12.132 { 00:22:12.132 "name": "nvme0n1", 00:22:12.132 "aliases": [ 00:22:12.132 "02628d80-83ae-4870-833d-e31c2fa3f018" 00:22:12.132 ], 00:22:12.132 "product_name": "NVMe disk", 00:22:12.132 "block_size": 4096, 00:22:12.132 "num_blocks": 1310720, 00:22:12.132 "uuid": "02628d80-83ae-4870-833d-e31c2fa3f018", 00:22:12.132 "numa_id": -1, 00:22:12.132 "assigned_rate_limits": { 00:22:12.132 "rw_ios_per_sec": 0, 00:22:12.132 "rw_mbytes_per_sec": 0, 00:22:12.132 "r_mbytes_per_sec": 0, 00:22:12.132 "w_mbytes_per_sec": 0 00:22:12.132 }, 00:22:12.132 "claimed": true, 00:22:12.132 "claim_type": "read_many_write_one", 00:22:12.132 "zoned": false, 00:22:12.132 "supported_io_types": { 00:22:12.132 "read": true, 00:22:12.132 "write": true, 00:22:12.133 "unmap": true, 00:22:12.133 "flush": true, 00:22:12.133 "reset": true, 00:22:12.133 "nvme_admin": true, 00:22:12.133 "nvme_io": true, 00:22:12.133 "nvme_io_md": false, 00:22:12.133 "write_zeroes": true, 00:22:12.133 "zcopy": false, 00:22:12.133 "get_zone_info": false, 00:22:12.133 "zone_management": false, 00:22:12.133 "zone_append": false, 00:22:12.133 "compare": true, 00:22:12.133 "compare_and_write": false, 00:22:12.133 "abort": true, 00:22:12.133 "seek_hole": false, 00:22:12.133 "seek_data": false, 00:22:12.133 "copy": true, 00:22:12.133 "nvme_iov_md": false 00:22:12.133 }, 00:22:12.133 "driver_specific": { 00:22:12.133 
"nvme": [ 00:22:12.133 { 00:22:12.133 "pci_address": "0000:00:11.0", 00:22:12.133 "trid": { 00:22:12.133 "trtype": "PCIe", 00:22:12.133 "traddr": "0000:00:11.0" 00:22:12.133 }, 00:22:12.133 "ctrlr_data": { 00:22:12.133 "cntlid": 0, 00:22:12.133 "vendor_id": "0x1b36", 00:22:12.133 "model_number": "QEMU NVMe Ctrl", 00:22:12.133 "serial_number": "12341", 00:22:12.133 "firmware_revision": "8.0.0", 00:22:12.133 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:12.133 "oacs": { 00:22:12.133 "security": 0, 00:22:12.133 "format": 1, 00:22:12.133 "firmware": 0, 00:22:12.133 "ns_manage": 1 00:22:12.133 }, 00:22:12.133 "multi_ctrlr": false, 00:22:12.133 "ana_reporting": false 00:22:12.133 }, 00:22:12.133 "vs": { 00:22:12.133 "nvme_version": "1.4" 00:22:12.133 }, 00:22:12.133 "ns_data": { 00:22:12.133 "id": 1, 00:22:12.133 "can_share": false 00:22:12.133 } 00:22:12.133 } 00:22:12.133 ], 00:22:12.133 "mp_policy": "active_passive" 00:22:12.133 } 00:22:12.133 } 00:22:12.133 ]' 00:22:12.133 15:15:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:12.133 15:15:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:12.133 15:15:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:12.133 15:15:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:12.133 15:15:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:12.133 15:15:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:22:12.133 15:15:12 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:22:12.133 15:15:12 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:12.133 15:15:12 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:22:12.133 15:15:12 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:12.133 15:15:12 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:12.391 15:15:13 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=4d80554a-372c-456a-941d-8023abfeae3f 00:22:12.391 15:15:13 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:22:12.391 15:15:13 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4d80554a-372c-456a-941d-8023abfeae3f 00:22:12.649 15:15:13 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:12.908 15:15:13 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=acceff51-e065-4d4a-88fa-dc5b898f666d 00:22:12.908 15:15:13 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u acceff51-e065-4d4a-88fa-dc5b898f666d 00:22:13.476 15:15:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=4e35a89a-754b-4310-a1c2-7ea72f0feb43 00:22:13.476 15:15:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4e35a89a-754b-4310-a1c2-7ea72f0feb43 00:22:13.476 15:15:14 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:22:13.476 15:15:14 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:13.476 15:15:14 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=4e35a89a-754b-4310-a1c2-7ea72f0feb43 00:22:13.476 15:15:14 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:22:13.476 15:15:14 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 4e35a89a-754b-4310-a1c2-7ea72f0feb43 00:22:13.476 15:15:14 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=4e35a89a-754b-4310-a1c2-7ea72f0feb43 00:22:13.476 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:13.476 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:13.476 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:13.476 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e35a89a-754b-4310-a1c2-7ea72f0feb43 00:22:13.476 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:13.476 { 00:22:13.476 "name": "4e35a89a-754b-4310-a1c2-7ea72f0feb43", 00:22:13.476 "aliases": [ 00:22:13.476 "lvs/nvme0n1p0" 00:22:13.476 ], 00:22:13.476 "product_name": "Logical Volume", 00:22:13.476 "block_size": 4096, 00:22:13.476 "num_blocks": 26476544, 00:22:13.476 "uuid": "4e35a89a-754b-4310-a1c2-7ea72f0feb43", 00:22:13.476 "assigned_rate_limits": { 00:22:13.476 "rw_ios_per_sec": 0, 00:22:13.476 "rw_mbytes_per_sec": 0, 00:22:13.476 "r_mbytes_per_sec": 0, 00:22:13.476 "w_mbytes_per_sec": 0 00:22:13.476 }, 00:22:13.476 "claimed": false, 00:22:13.476 "zoned": false, 00:22:13.476 "supported_io_types": { 00:22:13.476 "read": true, 00:22:13.476 "write": true, 00:22:13.476 "unmap": true, 00:22:13.476 "flush": false, 00:22:13.476 "reset": true, 00:22:13.476 "nvme_admin": false, 00:22:13.476 "nvme_io": false, 00:22:13.476 "nvme_io_md": false, 00:22:13.476 "write_zeroes": true, 00:22:13.476 "zcopy": false, 00:22:13.476 "get_zone_info": false, 00:22:13.476 "zone_management": false, 00:22:13.476 "zone_append": false, 00:22:13.476 "compare": false, 00:22:13.476 "compare_and_write": false, 00:22:13.476 "abort": false, 00:22:13.476 "seek_hole": true, 00:22:13.476 "seek_data": true, 00:22:13.476 "copy": false, 00:22:13.476 "nvme_iov_md": false 00:22:13.476 }, 00:22:13.476 "driver_specific": { 00:22:13.476 "lvol": { 00:22:13.477 "lvol_store_uuid": "acceff51-e065-4d4a-88fa-dc5b898f666d", 00:22:13.477 "base_bdev": "nvme0n1", 00:22:13.477 "thin_provision": true, 00:22:13.477 "num_allocated_clusters": 0, 00:22:13.477 "snapshot": false, 00:22:13.477 "clone": false, 00:22:13.477 "esnap_clone": false 00:22:13.477 } 00:22:13.477 } 00:22:13.477 } 00:22:13.477 ]' 00:22:13.477 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:13.477 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:13.736 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:13.736 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:13.736 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:13.736 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:13.736 15:15:14 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:22:13.736 15:15:14 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:22:13.736 15:15:14 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:13.995 15:15:14 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:13.995 15:15:14 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:13.995 15:15:14 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 4e35a89a-754b-4310-a1c2-7ea72f0feb43 00:22:13.995 15:15:14 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=4e35a89a-754b-4310-a1c2-7ea72f0feb43 00:22:13.995 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:13.995 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:13.995 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:13.995 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e35a89a-754b-4310-a1c2-7ea72f0feb43 00:22:14.253 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:14.253 { 00:22:14.253 "name": "4e35a89a-754b-4310-a1c2-7ea72f0feb43", 00:22:14.253 "aliases": [ 00:22:14.253 "lvs/nvme0n1p0" 00:22:14.253 ], 00:22:14.253 "product_name": "Logical Volume", 00:22:14.253 "block_size": 4096, 00:22:14.253 "num_blocks": 26476544, 00:22:14.253 "uuid": "4e35a89a-754b-4310-a1c2-7ea72f0feb43", 00:22:14.253 "assigned_rate_limits": { 00:22:14.253 "rw_ios_per_sec": 0, 00:22:14.253 "rw_mbytes_per_sec": 0, 00:22:14.253 "r_mbytes_per_sec": 0, 00:22:14.253 "w_mbytes_per_sec": 0 00:22:14.253 }, 00:22:14.253 "claimed": false, 00:22:14.253 "zoned": false, 00:22:14.253 "supported_io_types": { 00:22:14.253 "read": true, 00:22:14.253 "write": true, 00:22:14.253 "unmap": true, 00:22:14.253 "flush": false, 00:22:14.253 "reset": true, 00:22:14.253 "nvme_admin": false, 00:22:14.253 "nvme_io": false, 00:22:14.253 "nvme_io_md": false, 00:22:14.253 "write_zeroes": true, 00:22:14.253 "zcopy": false, 00:22:14.253 "get_zone_info": false, 00:22:14.253 "zone_management": false, 00:22:14.253 "zone_append": false, 00:22:14.253 "compare": false, 00:22:14.253 "compare_and_write": false, 00:22:14.253 "abort": false, 00:22:14.253 "seek_hole": true, 00:22:14.253 "seek_data": true, 00:22:14.253 "copy": false, 00:22:14.253 "nvme_iov_md": false 00:22:14.253 }, 00:22:14.253 "driver_specific": { 00:22:14.253 "lvol": { 00:22:14.253 "lvol_store_uuid": "acceff51-e065-4d4a-88fa-dc5b898f666d", 00:22:14.253 "base_bdev": "nvme0n1", 00:22:14.253 "thin_provision": true, 00:22:14.253 "num_allocated_clusters": 0, 00:22:14.253 "snapshot": false, 00:22:14.253 "clone": false, 00:22:14.253 "esnap_clone": false 00:22:14.253 } 00:22:14.253 } 00:22:14.253 } 00:22:14.253 ]' 00:22:14.253 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:14.253 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:14.254 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:14.254 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:14.254 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:14.254 15:15:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:14.254 15:15:14 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:22:14.254 15:15:14 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:14.512 15:15:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:22:14.512 15:15:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 4e35a89a-754b-4310-a1c2-7ea72f0feb43 00:22:14.512 15:15:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=4e35a89a-754b-4310-a1c2-7ea72f0feb43 00:22:14.512 15:15:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:14.512 15:15:15 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:22:14.512 15:15:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:14.512 15:15:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e35a89a-754b-4310-a1c2-7ea72f0feb43 00:22:14.770 15:15:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:14.770 { 00:22:14.770 "name": "4e35a89a-754b-4310-a1c2-7ea72f0feb43", 00:22:14.770 "aliases": [ 00:22:14.770 "lvs/nvme0n1p0" 00:22:14.770 ], 00:22:14.770 "product_name": "Logical Volume", 00:22:14.770 "block_size": 4096, 00:22:14.770 "num_blocks": 26476544, 00:22:14.770 "uuid": "4e35a89a-754b-4310-a1c2-7ea72f0feb43", 00:22:14.770 "assigned_rate_limits": { 00:22:14.770 "rw_ios_per_sec": 0, 00:22:14.770 "rw_mbytes_per_sec": 0, 00:22:14.770 "r_mbytes_per_sec": 0, 00:22:14.770 "w_mbytes_per_sec": 0 00:22:14.770 }, 00:22:14.770 "claimed": false, 00:22:14.770 "zoned": false, 00:22:14.770 "supported_io_types": { 00:22:14.770 "read": true, 00:22:14.770 "write": true, 00:22:14.770 "unmap": true, 00:22:14.770 "flush": false, 00:22:14.770 "reset": true, 00:22:14.770 "nvme_admin": false, 00:22:14.770 "nvme_io": false, 00:22:14.770 "nvme_io_md": false, 00:22:14.770 "write_zeroes": true, 00:22:14.770 "zcopy": false, 00:22:14.770 "get_zone_info": false, 00:22:14.770 "zone_management": false, 00:22:14.770 "zone_append": false, 00:22:14.770 "compare": false, 00:22:14.770 "compare_and_write": false, 00:22:14.770 "abort": false, 00:22:14.770 "seek_hole": true, 00:22:14.770 "seek_data": true, 00:22:14.770 "copy": false, 00:22:14.770 "nvme_iov_md": false 00:22:14.770 }, 00:22:14.770 "driver_specific": { 00:22:14.770 "lvol": { 00:22:14.770 "lvol_store_uuid": "acceff51-e065-4d4a-88fa-dc5b898f666d", 00:22:14.770 "base_bdev": "nvme0n1", 00:22:14.770 "thin_provision": true, 00:22:14.770 "num_allocated_clusters": 0, 00:22:14.770 "snapshot": false, 00:22:14.770 "clone": false, 00:22:14.770 "esnap_clone": false 00:22:14.770 } 00:22:14.770 } 00:22:14.770 } 00:22:14.770 ]' 00:22:14.770 15:15:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:14.770 15:15:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:14.770 15:15:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:14.770 15:15:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:14.770 15:15:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:14.770 15:15:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:14.770 15:15:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:22:14.770 15:15:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4e35a89a-754b-4310-a1c2-7ea72f0feb43 -c nvc0n1p0 --l2p_dram_limit 20 00:22:15.029 [2024-11-20 15:15:15.742613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.029 [2024-11-20 15:15:15.742700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:15.029 [2024-11-20 15:15:15.742732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:15.029 [2024-11-20 15:15:15.742747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.029 [2024-11-20 15:15:15.742831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.029 [2024-11-20 15:15:15.742851] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:15.029 [2024-11-20 15:15:15.742863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:15.029 [2024-11-20 15:15:15.742876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.029 [2024-11-20 15:15:15.742898] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:15.029 [2024-11-20 15:15:15.744065] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:15.029 [2024-11-20 15:15:15.744099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.029 [2024-11-20 15:15:15.744115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:15.029 [2024-11-20 15:15:15.744127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.209 ms 00:22:15.029 [2024-11-20 15:15:15.744140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.029 [2024-11-20 15:15:15.744228] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 598836d9-4fd3-4371-9434-6919de9c6df3 00:22:15.029 [2024-11-20 15:15:15.746642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.029 [2024-11-20 15:15:15.746686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:15.029 [2024-11-20 15:15:15.746704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:15.029 [2024-11-20 15:15:15.746746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.029 [2024-11-20 15:15:15.760414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.029 [2024-11-20 15:15:15.760471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:15.029 [2024-11-20 15:15:15.760492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.595 ms 00:22:15.029 [2024-11-20 15:15:15.760505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.029 [2024-11-20 15:15:15.760659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.029 [2024-11-20 15:15:15.760677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:15.029 [2024-11-20 15:15:15.760697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:22:15.029 [2024-11-20 15:15:15.760707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.029 [2024-11-20 15:15:15.760817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.029 [2024-11-20 15:15:15.760830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:15.029 [2024-11-20 15:15:15.760846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:15.029 [2024-11-20 15:15:15.760856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.029 [2024-11-20 15:15:15.760891] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:15.029 [2024-11-20 15:15:15.767474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.029 [2024-11-20 15:15:15.767549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:15.029 [2024-11-20 15:15:15.767563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.608 ms 00:22:15.029 [2024-11-20 15:15:15.767584] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.029 [2024-11-20 15:15:15.767629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.029 [2024-11-20 15:15:15.767644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:15.029 [2024-11-20 15:15:15.767656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:15.029 [2024-11-20 15:15:15.767669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.029 [2024-11-20 15:15:15.767714] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:15.029 [2024-11-20 15:15:15.767882] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:15.029 [2024-11-20 15:15:15.767898] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:15.029 [2024-11-20 15:15:15.767916] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:15.029 [2024-11-20 15:15:15.767930] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:15.029 [2024-11-20 15:15:15.767947] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:15.029 [2024-11-20 15:15:15.767959] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:15.029 [2024-11-20 15:15:15.767973] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:15.029 [2024-11-20 15:15:15.767983] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:15.029 [2024-11-20 15:15:15.767997] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:15.029 [2024-11-20 15:15:15.768009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.029 [2024-11-20 15:15:15.768028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:15.029 [2024-11-20 15:15:15.768040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:22:15.029 [2024-11-20 15:15:15.768054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.029 [2024-11-20 15:15:15.768128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.029 [2024-11-20 15:15:15.768144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:15.029 [2024-11-20 15:15:15.768155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:22:15.029 [2024-11-20 15:15:15.768172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.029 [2024-11-20 15:15:15.768258] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:15.029 [2024-11-20 15:15:15.768273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:15.029 [2024-11-20 15:15:15.768287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:15.029 [2024-11-20 15:15:15.768302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.029 [2024-11-20 15:15:15.768313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:15.029 [2024-11-20 15:15:15.768326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:15.029 [2024-11-20 15:15:15.768335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:15.029 
[2024-11-20 15:15:15.768348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:15.029 [2024-11-20 15:15:15.768357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:15.029 [2024-11-20 15:15:15.768369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:15.029 [2024-11-20 15:15:15.768378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:15.029 [2024-11-20 15:15:15.768391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:15.029 [2024-11-20 15:15:15.768400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:15.029 [2024-11-20 15:15:15.768427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:15.029 [2024-11-20 15:15:15.768437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:15.029 [2024-11-20 15:15:15.768453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.029 [2024-11-20 15:15:15.768462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:15.029 [2024-11-20 15:15:15.768475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:15.029 [2024-11-20 15:15:15.768484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.029 [2024-11-20 15:15:15.768498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:15.029 [2024-11-20 15:15:15.768508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:15.029 [2024-11-20 15:15:15.768521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.029 [2024-11-20 15:15:15.768533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:15.029 [2024-11-20 15:15:15.768546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:15.029 [2024-11-20 15:15:15.768556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.029 [2024-11-20 15:15:15.768586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:15.030 [2024-11-20 15:15:15.768596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:15.030 [2024-11-20 15:15:15.768610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.030 [2024-11-20 15:15:15.768620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:15.030 [2024-11-20 15:15:15.768633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:15.030 [2024-11-20 15:15:15.768643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.030 [2024-11-20 15:15:15.768660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:15.030 [2024-11-20 15:15:15.768670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:15.030 [2024-11-20 15:15:15.768682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:15.030 [2024-11-20 15:15:15.768692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:15.030 [2024-11-20 15:15:15.768704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:15.030 [2024-11-20 15:15:15.768714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:15.030 [2024-11-20 15:15:15.768727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:15.030 [2024-11-20 15:15:15.768747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:22:15.030 [2024-11-20 15:15:15.768761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.030 [2024-11-20 15:15:15.768771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:15.030 [2024-11-20 15:15:15.768784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:15.030 [2024-11-20 15:15:15.768794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.030 [2024-11-20 15:15:15.768807] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:15.030 [2024-11-20 15:15:15.768818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:15.030 [2024-11-20 15:15:15.768832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:15.030 [2024-11-20 15:15:15.768843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.030 [2024-11-20 15:15:15.768863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:15.030 [2024-11-20 15:15:15.768873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:15.030 [2024-11-20 15:15:15.768886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:15.030 [2024-11-20 15:15:15.768897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:15.030 [2024-11-20 15:15:15.768910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:15.030 [2024-11-20 15:15:15.768920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:15.030 [2024-11-20 15:15:15.768938] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:15.030 [2024-11-20 15:15:15.768954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:15.030 [2024-11-20 15:15:15.768970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:15.030 [2024-11-20 15:15:15.768982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:15.030 [2024-11-20 15:15:15.768997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:15.030 [2024-11-20 15:15:15.769009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:15.030 [2024-11-20 15:15:15.769025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:15.030 [2024-11-20 15:15:15.769036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:15.030 [2024-11-20 15:15:15.769051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:15.030 [2024-11-20 15:15:15.769062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:15.030 [2024-11-20 15:15:15.769080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:15.030 [2024-11-20 15:15:15.769091] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:15.030 [2024-11-20 15:15:15.769105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:15.030 [2024-11-20 15:15:15.769116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:15.030 [2024-11-20 15:15:15.769130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:15.030 [2024-11-20 15:15:15.769142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:15.030 [2024-11-20 15:15:15.769156] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:15.030 [2024-11-20 15:15:15.769168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:15.030 [2024-11-20 15:15:15.769185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:15.030 [2024-11-20 15:15:15.769196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:15.030 [2024-11-20 15:15:15.769210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:15.030 [2024-11-20 15:15:15.769221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:15.030 [2024-11-20 15:15:15.769236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.030 [2024-11-20 15:15:15.769251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:15.030 [2024-11-20 15:15:15.769265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:22:15.030 [2024-11-20 15:15:15.769276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.030 [2024-11-20 15:15:15.769328] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
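Two numbers in the layout dump above cross-check each other: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB reported for the l2p region, while the --l2p_dram_limit 20 passed to bdev_ftl_create caps how much of that table stays resident in DRAM (the ftl_l2p_cache notice further down reports 19 of 20 MiB). A one-line sanity check:

    # L2P region size from the dump above: entries x address size, in MiB
    echo $(( 20971520 * 4 / 1024 / 1024 ))   # prints 80, matching "Region l2p ... blocks: 80.00 MiB"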
00:22:15.030 [2024-11-20 15:15:15.769342] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:18.332 [2024-11-20 15:15:18.808498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.332 [2024-11-20 15:15:18.808597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:18.332 [2024-11-20 15:15:18.808628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3044.100 ms 00:22:18.332 [2024-11-20 15:15:18.808640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.332 [2024-11-20 15:15:18.858046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.332 [2024-11-20 15:15:18.858128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:18.332 [2024-11-20 15:15:18.858152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.073 ms 00:22:18.332 [2024-11-20 15:15:18.858181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.332 [2024-11-20 15:15:18.858433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.332 [2024-11-20 15:15:18.858450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:18.332 [2024-11-20 15:15:18.858471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:18.332 [2024-11-20 15:15:18.858482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.332 [2024-11-20 15:15:18.923939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.332 [2024-11-20 15:15:18.924019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:18.332 [2024-11-20 15:15:18.924041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.506 ms 00:22:18.332 [2024-11-20 15:15:18.924054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.332 [2024-11-20 15:15:18.924130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.332 [2024-11-20 15:15:18.924147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:18.332 [2024-11-20 15:15:18.924163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:18.332 [2024-11-20 15:15:18.924174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.332 [2024-11-20 15:15:18.925063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.332 [2024-11-20 15:15:18.925081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:18.332 [2024-11-20 15:15:18.925096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:22:18.332 [2024-11-20 15:15:18.925107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.332 [2024-11-20 15:15:18.925246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.332 [2024-11-20 15:15:18.925261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:18.333 [2024-11-20 15:15:18.925279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:22:18.333 [2024-11-20 15:15:18.925289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.333 [2024-11-20 15:15:18.948513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.333 [2024-11-20 15:15:18.948588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:18.333 [2024-11-20 
15:15:18.948611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.234 ms 00:22:18.333 [2024-11-20 15:15:18.948623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.333 [2024-11-20 15:15:18.964478] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:22:18.333 [2024-11-20 15:15:18.973694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.333 [2024-11-20 15:15:18.973760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:18.333 [2024-11-20 15:15:18.973778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.948 ms 00:22:18.333 [2024-11-20 15:15:18.973810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.333 [2024-11-20 15:15:19.059864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.333 [2024-11-20 15:15:19.059973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:18.333 [2024-11-20 15:15:19.059993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.126 ms 00:22:18.333 [2024-11-20 15:15:19.060009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.333 [2024-11-20 15:15:19.060286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.333 [2024-11-20 15:15:19.060311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:18.333 [2024-11-20 15:15:19.060324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:22:18.333 [2024-11-20 15:15:19.060338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.333 [2024-11-20 15:15:19.103520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.333 [2024-11-20 15:15:19.103599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:18.333 [2024-11-20 15:15:19.103619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.176 ms 00:22:18.333 [2024-11-20 15:15:19.103633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.333 [2024-11-20 15:15:19.143207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.333 [2024-11-20 15:15:19.143289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:18.333 [2024-11-20 15:15:19.143309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.558 ms 00:22:18.333 [2024-11-20 15:15:19.143323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.333 [2024-11-20 15:15:19.144108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.333 [2024-11-20 15:15:19.144137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:18.333 [2024-11-20 15:15:19.144150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:22:18.333 [2024-11-20 15:15:19.144165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.593 [2024-11-20 15:15:19.252605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.593 [2024-11-20 15:15:19.252702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:18.593 [2024-11-20 15:15:19.252733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.522 ms 00:22:18.593 [2024-11-20 15:15:19.252749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.593 [2024-11-20 
15:15:19.295693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.593 [2024-11-20 15:15:19.295790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:18.593 [2024-11-20 15:15:19.295814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.870 ms 00:22:18.593 [2024-11-20 15:15:19.295828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.593 [2024-11-20 15:15:19.337685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.593 [2024-11-20 15:15:19.337789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:18.593 [2024-11-20 15:15:19.337808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.855 ms 00:22:18.593 [2024-11-20 15:15:19.337826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.593 [2024-11-20 15:15:19.379666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.593 [2024-11-20 15:15:19.379758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:18.593 [2024-11-20 15:15:19.379778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.836 ms 00:22:18.593 [2024-11-20 15:15:19.379792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.593 [2024-11-20 15:15:19.379867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.593 [2024-11-20 15:15:19.379889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:18.593 [2024-11-20 15:15:19.379902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:18.593 [2024-11-20 15:15:19.379917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.593 [2024-11-20 15:15:19.380075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.593 [2024-11-20 15:15:19.380094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:18.593 [2024-11-20 15:15:19.380106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:18.593 [2024-11-20 15:15:19.380120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.593 [2024-11-20 15:15:19.381596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3644.292 ms, result 0 00:22:18.593 { 00:22:18.593 "name": "ftl0", 00:22:18.593 "uuid": "598836d9-4fd3-4371-9434-6919de9c6df3" 00:22:18.593 } 00:22:18.593 15:15:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:22:18.593 15:15:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:22:18.593 15:15:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:22:18.852 15:15:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:22:19.111 [2024-11-20 15:15:19.745445] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:22:19.111 I/O size of 69632 is greater than zero copy threshold (65536). 00:22:19.111 Zero copy mechanism will not be used. 00:22:19.111 Running I/O for 4 seconds... 
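With ftl0 started, the workloads are driven over the same RPC socket: bdevperf.py perform_tests simply tells the idle bdevperf instance what to run. The 69632-byte I/O size (68 KiB) is above bdevperf's 65536-byte zero-copy threshold, which is why the notice above says zero copy will not be used. Spelled out, the two steps traced at bdevperf.sh@28 and @30 are:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    perf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

    # Confirm the ftl0 bdev exists before running the workload.
    "$rpc_py" bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0
    # Queue depth 1, random writes, 4 seconds, 68 KiB I/Os.
    "$perf_py" perform_tests -q 1 -w randwrite -t 4 -o 69632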
00:22:20.982 1778.00 IOPS, 118.07 MiB/s
[2024-11-20T15:15:22.755Z] 1811.50 IOPS, 120.29 MiB/s
[2024-11-20T15:15:24.132Z] 1861.33 IOPS, 123.60 MiB/s
[2024-11-20T15:15:24.132Z] 1900.00 IOPS, 126.17 MiB/s
00:22:23.296 Latency(us)
00:22:23.296 [2024-11-20T15:15:24.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:23.296 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:22:23.296 ftl0 : 4.00 1899.45 126.14 0.00 0.00 553.86 213.85 2316.13
00:22:23.296 [2024-11-20T15:15:24.133Z] ===================================================================================================================
00:22:23.297 [2024-11-20T15:15:24.133Z] Total : 1899.45 126.14 0.00 0.00 553.86 213.85 2316.13
00:22:23.297 [2024-11-20 15:15:23.751641] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:22:23.297 {
00:22:23.297 "results": [
00:22:23.297 {
00:22:23.297 "job": "ftl0",
00:22:23.297 "core_mask": "0x1",
00:22:23.297 "workload": "randwrite",
00:22:23.297 "status": "finished",
00:22:23.297 "queue_depth": 1,
00:22:23.297 "io_size": 69632,
00:22:23.297 "runtime": 4.001676,
00:22:23.297 "iops": 1899.4541287200664,
00:22:23.297 "mibps": 126.1356257353169,
00:22:23.297 "io_failed": 0,
00:22:23.297 "io_timeout": 0,
00:22:23.297 "avg_latency_us": 553.8587442256859,
00:22:23.297 "min_latency_us": 213.84738955823292,
00:22:23.297 "max_latency_us": 2316.1317269076303
00:22:23.297 }
00:22:23.297 ],
00:22:23.297 "core_count": 1
00:22:23.297 }
00:22:23.297 15:15:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-11-20 15:15:23.898529] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
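The qd=1 result above is internally consistent: throughput is IOPS times I/O size, and 1899.45 x 69632 B comes out to the 126.14 MiB/s printed as "mibps" in the results JSON. A quick cross-check:

    # iops and io_size taken verbatim from the results JSON above
    awk 'BEGIN { print 1899.4541287200664 * 69632 / 2^20 }'   # -> 126.136, i.e. the mibps value above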
00:22:25.169 9858.00 IOPS, 38.51 MiB/s
[2024-11-20T15:15:26.983Z] 9835.50 IOPS, 38.42 MiB/s
[2024-11-20T15:15:27.941Z] 9891.33 IOPS, 38.64 MiB/s
[2024-11-20T15:15:27.941Z] 9638.25 IOPS, 37.65 MiB/s
00:22:27.105 Latency(us)
00:22:27.105 [2024-11-20T15:15:27.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:27.105 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:22:27.105 ftl0 : 4.03 9593.16 37.47 0.00 0.00 13283.27 251.68 41058.70
00:22:27.105 [2024-11-20T15:15:27.941Z] ===================================================================================================================
00:22:27.105 [2024-11-20T15:15:27.941Z] Total : 9593.16 37.47 0.00 0.00 13283.27 0.00 41058.70
00:22:27.362 [2024-11-20 15:15:27.939395] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
"results": [
{
"job": "ftl0",
"core_mask": "0x1",
"workload": "randwrite",
"status": "finished",
"queue_depth": 128,
"io_size": 4096,
"runtime": 4.032143,
"iops": 9593.16175046371,
"mibps": 37.47328808774887,
"io_failed": 0,
"io_timeout": 0,
"avg_latency_us": 13283.26626509139,
"min_latency_us": 251.68192771084338,
"max_latency_us": 41058.698795180724
}
],
"core_count": 1
}
15:15:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-11-20 15:15:28.058586] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
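The qd=128 run parses the same way: 9593.16 x 4096 B / 2^20 gives the reported 37.47 MiB/s. The third test switches -w to verify, a workload that reads the written data back and checks it over the LBA range shown in its table below (start 0x0, length 0x1400000 = 20971520). Headline numbers can be pulled from any of these result blobs with jq; saving the blob to a file named results.json first is an assumption of this sketch:

    # Hypothetical: one of the JSON result blobs above saved as results.json.
    # Field names are exactly as printed in this log.
    jq '.results[0] | {workload, queue_depth, iops, mibps, avg_latency_us}' results.json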
00:22:29.239 7528.00 IOPS, 29.41 MiB/s
[2024-11-20T15:15:31.450Z] 7582.50 IOPS, 29.62 MiB/s
[2024-11-20T15:15:32.385Z] 7719.33 IOPS, 30.15 MiB/s
[2024-11-20T15:15:32.385Z] 7745.00 IOPS, 30.25 MiB/s
00:22:31.549 Latency(us)
00:22:31.549 [2024-11-20T15:15:32.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:31.549 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:31.549 Verification LBA range: start 0x0 length 0x1400000
00:22:31.549 ftl0 : 4.01 7753.40 30.29 0.00 0.00 16457.37 284.58 31162.50
00:22:31.549 [2024-11-20T15:15:32.385Z] ===================================================================================================================
00:22:31.549 [2024-11-20T15:15:32.385Z] Total : 7753.40 30.29 0.00 0.00 16457.37 0.00 31162.50
00:22:31.549 [2024-11-20 15:15:32.085190] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:22:31.549 "results": [
00:22:31.549 {
00:22:31.549 "job": "ftl0",
00:22:31.549 "core_mask": "0x1",
00:22:31.549 "workload": "verify",
00:22:31.549 "status": "finished",
00:22:31.549 "verify_range": {
00:22:31.549 "start": 0,
00:22:31.549 "length": 20971520
00:22:31.549 },
00:22:31.549 "queue_depth": 128,
00:22:31.549 "io_size": 4096,
00:22:31.549 "runtime": 4.010888,
00:22:31.549 "iops": 7753.395258107432,
00:22:31.549 "mibps": 30.286700226982155,
00:22:31.549 "io_failed": 0,
00:22:31.549 "io_timeout": 0,
00:22:31.549 "avg_latency_us": 16457.36881272598,
00:22:31.549 "min_latency_us": 284.58152610441766,
00:22:31.549 "max_latency_us": 31162.499598393573
00:22:31.549 }
00:22:31.549 ],
00:22:31.549 "core_count": 1
00:22:31.549 }
00:22:31.549 15:15:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-11-20 15:15:32.315136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-20 15:15:32.315225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
[2024-11-20 15:15:32.315245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
[2024-11-20 15:15:32.315259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-20 15:15:32.315288] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-20 15:15:32.320008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-20 15:15:32.320046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
[2024-11-20 15:15:32.320067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.700 ms
[2024-11-20 15:15:32.320078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-20 15:15:32.321964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-20 15:15:32.322006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
[2024-11-20 15:15:32.322028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.851 ms
[2024-11-20 15:15:32.322044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-20 15:15:32.540706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-20 15:15:32.540798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:22:31.807 [2024-11-20 15:15:32.540828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 218.961 ms 00:22:31.807 [2024-11-20 15:15:32.540841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.807 [2024-11-20 15:15:32.546121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.807 [2024-11-20 15:15:32.546168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:31.807 [2024-11-20 15:15:32.546187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.223 ms 00:22:31.807 [2024-11-20 15:15:32.546198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.808 [2024-11-20 15:15:32.585184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.808 [2024-11-20 15:15:32.585259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:31.808 [2024-11-20 15:15:32.585281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.965 ms 00:22:31.808 [2024-11-20 15:15:32.585293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.808 [2024-11-20 15:15:32.609305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.808 [2024-11-20 15:15:32.609390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:31.808 [2024-11-20 15:15:32.609415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.975 ms 00:22:31.808 [2024-11-20 15:15:32.609426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.808 [2024-11-20 15:15:32.609642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.808 [2024-11-20 15:15:32.609659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:31.808 [2024-11-20 15:15:32.609679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:22:31.808 [2024-11-20 15:15:32.609690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.066 [2024-11-20 15:15:32.648675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.066 [2024-11-20 15:15:32.648754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:32.066 [2024-11-20 15:15:32.648776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.017 ms 00:22:32.066 [2024-11-20 15:15:32.648788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.066 [2024-11-20 15:15:32.686876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.066 [2024-11-20 15:15:32.686953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:32.066 [2024-11-20 15:15:32.686975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.074 ms 00:22:32.067 [2024-11-20 15:15:32.686987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.067 [2024-11-20 15:15:32.725755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.067 [2024-11-20 15:15:32.725848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:32.067 [2024-11-20 15:15:32.725873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.753 ms 00:22:32.067 [2024-11-20 15:15:32.725884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.067 [2024-11-20 15:15:32.762992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.067 [2024-11-20 
15:15:32.763058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:32.067 [2024-11-20 15:15:32.763084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.012 ms 00:22:32.067 [2024-11-20 15:15:32.763095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.067 [2024-11-20 15:15:32.763149] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:32.067 [2024-11-20 15:15:32.763172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:32.067 [2024-11-20 15:15:32.763456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free
00:22:32.067 [2024-11-20 15:15:32.763467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23-100: 0 / 261120 wr_cnt: 0 state: free
00:22:32.068 [2024-11-20 15:15:32.764503] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:32.068 [2024-11-20 15:15:32.764518] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 598836d9-4fd3-4371-9434-6919de9c6df3
00:22:32.068 [2024-11-20 15:15:32.764529] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:22:32.068 [2024-11-20 15:15:32.764548] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:22:32.068 [2024-11-20 15:15:32.764558] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:22:32.068 [2024-11-20 15:15:32.764573] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:22:32.068 [2024-11-20 15:15:32.764583] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:32.068 [2024-11-20 15:15:32.764597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:22:32.068 [2024-11-20 15:15:32.764608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:22:32.068 [2024-11-20 15:15:32.764623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:22:32.068 [2024-11-20 15:15:32.764632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:22:32.068 [2024-11-20 15:15:32.764646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:32.068 [2024-11-20 15:15:32.764656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:22:32.068 [2024-11-20 15:15:32.764670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.501 ms
00:22:32.068 [2024-11-20 15:15:32.764681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:32.068 [2024-11-20 15:15:32.786433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:32.068 [2024-11-20 15:15:32.786494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:22:32.068 [2024-11-20 15:15:32.786514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.705 ms
00:22:32.068 [2024-11-20 15:15:32.786525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:32.068 [2024-11-20 15:15:32.787153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:32.068 [2024-11-20 15:15:32.787177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:22:32.068 [2024-11-20 15:15:32.787192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms
00:22:32.068 [2024-11-20 15:15:32.787203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:32.068 [2024-11-20 15:15:32.845940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:32.068 [2024-11-20 15:15:32.846023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:22:32.068 [2024-11-20 15:15:32.846050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:32.068 [2024-11-20 15:15:32.846062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
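The WAF line in the dump above is the write-amplification factor: media writes divided by user writes. A quick sanity check of the numbers printed here, assuming that reading (tw/uw are illustrative names, not SPDK symbols):

  awk 'BEGIN { tw = 960; uw = 0; if (uw > 0) print tw / uw; else print "inf" }'

With zero user writes, all 960 writes were FTL housekeeping (superblock, band and L2P metadata written during startup and shutdown), so the ratio degenerates to the "inf" reported above.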
00:22:32.068 [2024-11-20 15:15:32.846168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:32.068 [2024-11-20 15:15:32.846179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:22:32.068 [2024-11-20 15:15:32.846193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:32.068 [2024-11-20 15:15:32.846203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:32.068 [2024-11-20 15:15:32.846355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:32.068 [2024-11-20 15:15:32.846370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:22:32.068 [2024-11-20 15:15:32.846385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:32.068 [2024-11-20 15:15:32.846396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:32.068 [2024-11-20 15:15:32.846419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:32.068 [2024-11-20 15:15:32.846446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:22:32.068 [2024-11-20 15:15:32.846460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:32.068 [2024-11-20 15:15:32.846471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:32.326 [2024-11-20 15:15:32.981246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:32.326 [2024-11-20 15:15:32.981321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:22:32.326 [2024-11-20 15:15:32.981347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:32.326 [2024-11-20 15:15:32.981358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:32.326 [2024-11-20 15:15:33.089635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:32.326 [2024-11-20 15:15:33.089759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:22:32.326 [2024-11-20 15:15:33.089783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:32.326 [2024-11-20 15:15:33.089795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:32.326 [2024-11-20 15:15:33.089963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:32.326 [2024-11-20 15:15:33.089982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:22:32.326 [2024-11-20 15:15:33.089998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:32.326 [2024-11-20 15:15:33.090010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:32.326 [2024-11-20 15:15:33.090081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:32.326 [2024-11-20 15:15:33.090095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:22:32.326 [2024-11-20 15:15:33.090110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:32.326 [2024-11-20 15:15:33.090122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:32.326 [2024-11-20 15:15:33.090294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:32.327 [2024-11-20 15:15:33.090309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:22:32.327 [2024-11-20 15:15:33.090334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:32.327 [2024-11-20 15:15:33.090345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:32.327 [2024-11-20 15:15:33.090393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:32.327 [2024-11-20 15:15:33.090407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:22:32.327 [2024-11-20 15:15:33.090422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:32.327 [2024-11-20 15:15:33.090433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:32.327 [2024-11-20 15:15:33.090485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:32.327 [2024-11-20 15:15:33.090513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:22:32.327 [2024-11-20 15:15:33.090532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:32.327 [2024-11-20 15:15:33.090543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:32.327 [2024-11-20 15:15:33.090602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:32.327 [2024-11-20 15:15:33.090629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:32.327 [2024-11-20 15:15:33.090645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:32.327 [2024-11-20 15:15:33.090666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:32.327 [2024-11-20 15:15:33.090854] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 776.909 ms, result 0
00:22:32.327 true
00:22:32.327 15:15:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78101
00:22:32.327 15:15:33 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78101 ']'
00:22:32.327 15:15:33 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78101
00:22:32.327 15:15:33 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname
00:22:32.327 15:15:33 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:32.327 15:15:33 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78101
00:22:32.585 15:15:33 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:32.585 15:15:33 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:32.585 killing process with pid 78101
00:22:32.585 15:15:33 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78101'
00:22:32.585 15:15:33 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78101
00:22:32.585 Received shutdown signal, test time was about 4.000000 seconds
00:22:32.585
00:22:32.585 Latency(us)
00:22:32.585 [2024-11-20T15:15:33.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:32.585 [2024-11-20T15:15:33.421Z] ===================================================================================================================
00:22:32.585 [2024-11-20T15:15:33.421Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:32.585 15:15:33 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78101
00:22:36.775 15:15:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:22:36.775 Remove shared memory files
00:22:36.775 15:15:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:22:36.775 15:15:37 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
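killprocess above is the harness teardown helper. A condensed sketch of the pattern visible in this trace, assuming plain POSIX tools (the real autotest_common.sh helper also special-cases processes launched via sudo, which the reactor_0 = sudo test above probes):

  killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1   # is the pid still alive? (traced above as: kill -0 78101)
    echo "killing process with pid $pid"
    kill "$pid"                  # plain SIGTERM, so the SPDK reactor can shut down cleanly
    wait "$pid"                  # reap the child and propagate its exit status
  }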
00:22:36.775 15:15:37 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:22:36.775 15:15:37 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:22:36.775 15:15:37 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:22:36.775 15:15:37 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:22:36.775 15:15:37 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:22:36.775
00:22:36.775 real 0m26.055s
00:22:36.775 user 0m28.846s
00:22:36.775 sys 0m1.631s
00:22:36.775 15:15:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:36.775 ************************************
00:22:36.775 15:15:37 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:22:36.775 END TEST ftl_bdevperf
00:22:36.775 ************************************
00:22:36.775 15:15:37 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:22:36.775 15:15:37 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:22:36.775 15:15:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:36.775 15:15:37 ftl -- common/autotest_common.sh@10 -- # set +x
00:22:36.775 ************************************
00:22:36.775 START TEST ftl_trim
00:22:36.775 ************************************
00:22:36.775 15:15:37 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:22:36.775 * Looking for test storage...
00:22:36.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:22:36.775 15:15:37 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:22:36.775 15:15:37 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version
00:22:36.775 15:15:37 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:22:36.775 15:15:37 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:22:36.775 15:15:37 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:36.775 15:15:37 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:36.775 15:15:37 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-:
00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1
00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-:
00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2
00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<'
00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2
00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1
00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in
00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1
00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:36.776 15:15:37 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:22:36.776 15:15:37 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:36.776 15:15:37 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:36.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.776 --rc genhtml_branch_coverage=1 00:22:36.776 --rc genhtml_function_coverage=1 00:22:36.776 --rc genhtml_legend=1 00:22:36.776 --rc geninfo_all_blocks=1 00:22:36.776 --rc geninfo_unexecuted_blocks=1 00:22:36.776 00:22:36.776 ' 00:22:36.776 15:15:37 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:36.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.776 --rc genhtml_branch_coverage=1 00:22:36.776 --rc genhtml_function_coverage=1 00:22:36.776 --rc genhtml_legend=1 00:22:36.776 --rc geninfo_all_blocks=1 00:22:36.776 --rc geninfo_unexecuted_blocks=1 00:22:36.776 00:22:36.776 ' 00:22:36.776 15:15:37 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:36.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.776 --rc genhtml_branch_coverage=1 00:22:36.776 --rc genhtml_function_coverage=1 00:22:36.776 --rc genhtml_legend=1 00:22:36.776 --rc geninfo_all_blocks=1 00:22:36.776 --rc geninfo_unexecuted_blocks=1 00:22:36.776 00:22:36.776 ' 00:22:36.776 15:15:37 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:36.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.776 --rc genhtml_branch_coverage=1 00:22:36.776 --rc genhtml_function_coverage=1 00:22:36.776 --rc genhtml_legend=1 00:22:36.776 --rc geninfo_all_blocks=1 00:22:36.776 --rc geninfo_unexecuted_blocks=1 00:22:36.776 00:22:36.776 ' 00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
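The lt 1.15 2 call traced above walks into cmp_versions from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared field by field until one side wins. A minimal standalone sketch of the same algorithm (version_lt is our name for it, not SPDK's):

  version_lt() {
    local IFS='.-:'
    local -a ver1 ver2
    local v=0
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    while (( v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}) )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first lower field decides
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      (( v++ ))
    done
    return 1   # equal versions are not less-than
  }
  version_lt 1.15 2 && echo 'lcov predates 2.x'   # true here: 1 < 2 in the first field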
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid=
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]]
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
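trim.sh's size knobs above are given in logical blocks; assuming the 4096-byte block size this device reports in the bdev dump further down, they translate as follows:

  echo "$(( 65536 * 4096 / 1024 / 1024 )) MiB data"        # data_size_in_blocks -> 256 MiB
  echo "$(( 1024 * 4096 / 1024 / 1024 )) MiB per unmap"    # unmap_size_in_blocks -> 4 MiB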
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78461
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78461
00:22:36.776 15:15:37 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:22:36.776 15:15:37 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78461 ']'
00:22:36.776 15:15:37 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:36.776 15:15:37 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:36.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:36.776 15:15:37 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:36.776 15:15:37 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:36.776 15:15:37 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:22:36.776 [2024-11-20 15:15:37.523376] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization...
00:22:37.035 [2024-11-20 15:15:37.523558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78461 ]
00:22:37.035 [2024-11-20 15:15:37.715087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:22:37.294 [2024-11-20 15:15:37.871210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:37.294 [2024-11-20 15:15:37.871355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:37.294 [2024-11-20 15:15:37.871391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:38.229 15:15:38 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:38.229 15:15:38 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:22:38.229 15:15:38 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:22:38.229 15:15:38 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0
00:22:38.229 15:15:38 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:22:38.229 15:15:38 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424
00:22:38.229 15:15:38 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev
00:22:38.229 15:15:38 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:22:38.487 15:15:39 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:22:38.487 15:15:39 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size
00:22:38.487 15:15:39 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:22:38.487 15:15:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:22:38.487 15:15:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:22:38.487 15:15:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:22:38.487 15:15:39 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:22:38.487 15:15:39 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:22:38.746 15:15:39 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:22:38.746 {
00:22:38.746 "name": "nvme0n1",
00:22:38.746 "aliases": [
00:22:38.746 "b33bc04a-609b-4c11-a725-befdcb68e40b" 00:22:38.746 ], 00:22:38.746 "product_name": "NVMe disk", 00:22:38.746 "block_size": 4096, 00:22:38.746 "num_blocks": 1310720, 00:22:38.746 "uuid": "b33bc04a-609b-4c11-a725-befdcb68e40b", 00:22:38.746 "numa_id": -1, 00:22:38.746 "assigned_rate_limits": { 00:22:38.746 "rw_ios_per_sec": 0, 00:22:38.746 "rw_mbytes_per_sec": 0, 00:22:38.746 "r_mbytes_per_sec": 0, 00:22:38.746 "w_mbytes_per_sec": 0 00:22:38.746 }, 00:22:38.746 "claimed": true, 00:22:38.746 "claim_type": "read_many_write_one", 00:22:38.746 "zoned": false, 00:22:38.746 "supported_io_types": { 00:22:38.746 "read": true, 00:22:38.746 "write": true, 00:22:38.746 "unmap": true, 00:22:38.746 "flush": true, 00:22:38.746 "reset": true, 00:22:38.746 "nvme_admin": true, 00:22:38.746 "nvme_io": true, 00:22:38.746 "nvme_io_md": false, 00:22:38.746 "write_zeroes": true, 00:22:38.746 "zcopy": false, 00:22:38.746 "get_zone_info": false, 00:22:38.746 "zone_management": false, 00:22:38.746 "zone_append": false, 00:22:38.746 "compare": true, 00:22:38.746 "compare_and_write": false, 00:22:38.746 "abort": true, 00:22:38.746 "seek_hole": false, 00:22:38.746 "seek_data": false, 00:22:38.746 "copy": true, 00:22:38.746 "nvme_iov_md": false 00:22:38.746 }, 00:22:38.746 "driver_specific": { 00:22:38.746 "nvme": [ 00:22:38.746 { 00:22:38.746 "pci_address": "0000:00:11.0", 00:22:38.746 "trid": { 00:22:38.746 "trtype": "PCIe", 00:22:38.746 "traddr": "0000:00:11.0" 00:22:38.746 }, 00:22:38.746 "ctrlr_data": { 00:22:38.746 "cntlid": 0, 00:22:38.746 "vendor_id": "0x1b36", 00:22:38.746 "model_number": "QEMU NVMe Ctrl", 00:22:38.746 "serial_number": "12341", 00:22:38.746 "firmware_revision": "8.0.0", 00:22:38.746 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:38.746 "oacs": { 00:22:38.746 "security": 0, 00:22:38.746 "format": 1, 00:22:38.746 "firmware": 0, 00:22:38.746 "ns_manage": 1 00:22:38.746 }, 00:22:38.746 "multi_ctrlr": false, 00:22:38.746 "ana_reporting": false 00:22:38.746 }, 00:22:38.746 "vs": { 00:22:38.746 "nvme_version": "1.4" 00:22:38.746 }, 00:22:38.746 "ns_data": { 00:22:38.746 "id": 1, 00:22:38.746 "can_share": false 00:22:38.746 } 00:22:38.746 } 00:22:38.746 ], 00:22:38.746 "mp_policy": "active_passive" 00:22:38.746 } 00:22:38.746 } 00:22:38.746 ]' 00:22:38.746 15:15:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:38.746 15:15:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:38.746 15:15:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:39.005 15:15:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:39.005 15:15:39 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:39.005 15:15:39 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:22:39.005 15:15:39 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:22:39.005 15:15:39 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:39.005 15:15:39 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:22:39.005 15:15:39 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:39.005 15:15:39 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:39.263 15:15:39 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=acceff51-e065-4d4a-88fa-dc5b898f666d 00:22:39.263 15:15:39 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:22:39.263 15:15:39 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u acceff51-e065-4d4a-88fa-dc5b898f666d 00:22:39.522 15:15:40 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:39.780 15:15:40 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=15b83965-7e50-4b17-b046-a89e1ba8e36f 00:22:39.780 15:15:40 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 15b83965-7e50-4b17-b046-a89e1ba8e36f 00:22:40.039 15:15:40 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50 00:22:40.039 15:15:40 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50 00:22:40.039 15:15:40 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:22:40.039 15:15:40 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:40.039 15:15:40 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50 00:22:40.039 15:15:40 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:22:40.040 15:15:40 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50 00:22:40.040 15:15:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50 00:22:40.040 15:15:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:40.040 15:15:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:40.040 15:15:40 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:40.040 15:15:40 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50 00:22:40.298 15:15:40 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:40.298 { 00:22:40.299 "name": "5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50", 00:22:40.299 "aliases": [ 00:22:40.299 "lvs/nvme0n1p0" 00:22:40.299 ], 00:22:40.299 "product_name": "Logical Volume", 00:22:40.299 "block_size": 4096, 00:22:40.299 "num_blocks": 26476544, 00:22:40.299 "uuid": "5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50", 00:22:40.299 "assigned_rate_limits": { 00:22:40.299 "rw_ios_per_sec": 0, 00:22:40.299 "rw_mbytes_per_sec": 0, 00:22:40.299 "r_mbytes_per_sec": 0, 00:22:40.299 "w_mbytes_per_sec": 0 00:22:40.299 }, 00:22:40.299 "claimed": false, 00:22:40.299 "zoned": false, 00:22:40.299 "supported_io_types": { 00:22:40.299 "read": true, 00:22:40.299 "write": true, 00:22:40.299 "unmap": true, 00:22:40.299 "flush": false, 00:22:40.299 "reset": true, 00:22:40.299 "nvme_admin": false, 00:22:40.299 "nvme_io": false, 00:22:40.299 "nvme_io_md": false, 00:22:40.299 "write_zeroes": true, 00:22:40.299 "zcopy": false, 00:22:40.299 "get_zone_info": false, 00:22:40.299 "zone_management": false, 00:22:40.299 "zone_append": false, 00:22:40.299 "compare": false, 00:22:40.299 "compare_and_write": false, 00:22:40.299 "abort": false, 00:22:40.299 "seek_hole": true, 00:22:40.299 "seek_data": true, 00:22:40.299 "copy": false, 00:22:40.299 "nvme_iov_md": false 00:22:40.299 }, 00:22:40.299 "driver_specific": { 00:22:40.299 "lvol": { 00:22:40.299 "lvol_store_uuid": "15b83965-7e50-4b17-b046-a89e1ba8e36f", 00:22:40.299 "base_bdev": "nvme0n1", 00:22:40.299 "thin_provision": true, 00:22:40.299 "num_allocated_clusters": 0, 00:22:40.299 "snapshot": false, 00:22:40.299 "clone": false, 00:22:40.299 "esnap_clone": false 00:22:40.299 } 00:22:40.299 } 00:22:40.299 } 00:22:40.299 ]' 00:22:40.299 15:15:40 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:40.299 15:15:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:40.299 15:15:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:40.299 15:15:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:40.299 15:15:40 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:40.299 15:15:40 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:22:40.299 15:15:40 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:22:40.299 15:15:40 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:22:40.299 15:15:40 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:40.580 15:15:41 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:40.580 15:15:41 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:40.580 15:15:41 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50 00:22:40.580 15:15:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50 00:22:40.580 15:15:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:40.580 15:15:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:40.580 15:15:41 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:40.580 15:15:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50 00:22:40.851 15:15:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:40.851 { 00:22:40.851 "name": "5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50", 00:22:40.851 "aliases": [ 00:22:40.851 "lvs/nvme0n1p0" 00:22:40.851 ], 00:22:40.851 "product_name": "Logical Volume", 00:22:40.851 "block_size": 4096, 00:22:40.851 "num_blocks": 26476544, 00:22:40.851 "uuid": "5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50", 00:22:40.851 "assigned_rate_limits": { 00:22:40.851 "rw_ios_per_sec": 0, 00:22:40.851 "rw_mbytes_per_sec": 0, 00:22:40.851 "r_mbytes_per_sec": 0, 00:22:40.851 "w_mbytes_per_sec": 0 00:22:40.851 }, 00:22:40.851 "claimed": false, 00:22:40.851 "zoned": false, 00:22:40.851 "supported_io_types": { 00:22:40.851 "read": true, 00:22:40.851 "write": true, 00:22:40.851 "unmap": true, 00:22:40.851 "flush": false, 00:22:40.851 "reset": true, 00:22:40.851 "nvme_admin": false, 00:22:40.851 "nvme_io": false, 00:22:40.851 "nvme_io_md": false, 00:22:40.851 "write_zeroes": true, 00:22:40.851 "zcopy": false, 00:22:40.851 "get_zone_info": false, 00:22:40.851 "zone_management": false, 00:22:40.851 "zone_append": false, 00:22:40.851 "compare": false, 00:22:40.851 "compare_and_write": false, 00:22:40.851 "abort": false, 00:22:40.851 "seek_hole": true, 00:22:40.851 "seek_data": true, 00:22:40.851 "copy": false, 00:22:40.851 "nvme_iov_md": false 00:22:40.851 }, 00:22:40.851 "driver_specific": { 00:22:40.851 "lvol": { 00:22:40.851 "lvol_store_uuid": "15b83965-7e50-4b17-b046-a89e1ba8e36f", 00:22:40.851 "base_bdev": "nvme0n1", 00:22:40.851 "thin_provision": true, 00:22:40.851 "num_allocated_clusters": 0, 00:22:40.851 "snapshot": false, 00:22:40.851 "clone": false, 00:22:40.851 "esnap_clone": false 00:22:40.851 } 00:22:40.851 } 00:22:40.851 } 00:22:40.851 ]' 00:22:40.851 15:15:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:40.851 15:15:41 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:22:40.851 15:15:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:40.851 15:15:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:40.851 15:15:41 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:40.851 15:15:41 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:22:40.851 15:15:41 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:22:40.852 15:15:41 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:41.110 15:15:41 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:22:41.110 15:15:41 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:22:41.110 15:15:41 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50 00:22:41.110 15:15:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50 00:22:41.110 15:15:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:41.110 15:15:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:41.110 15:15:41 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:41.110 15:15:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50 00:22:41.369 15:15:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:41.369 { 00:22:41.369 "name": "5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50", 00:22:41.369 "aliases": [ 00:22:41.369 "lvs/nvme0n1p0" 00:22:41.369 ], 00:22:41.369 "product_name": "Logical Volume", 00:22:41.369 "block_size": 4096, 00:22:41.369 "num_blocks": 26476544, 00:22:41.369 "uuid": "5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50", 00:22:41.369 "assigned_rate_limits": { 00:22:41.369 "rw_ios_per_sec": 0, 00:22:41.369 "rw_mbytes_per_sec": 0, 00:22:41.369 "r_mbytes_per_sec": 0, 00:22:41.369 "w_mbytes_per_sec": 0 00:22:41.369 }, 00:22:41.369 "claimed": false, 00:22:41.369 "zoned": false, 00:22:41.369 "supported_io_types": { 00:22:41.369 "read": true, 00:22:41.369 "write": true, 00:22:41.369 "unmap": true, 00:22:41.369 "flush": false, 00:22:41.369 "reset": true, 00:22:41.369 "nvme_admin": false, 00:22:41.369 "nvme_io": false, 00:22:41.369 "nvme_io_md": false, 00:22:41.369 "write_zeroes": true, 00:22:41.369 "zcopy": false, 00:22:41.369 "get_zone_info": false, 00:22:41.369 "zone_management": false, 00:22:41.369 "zone_append": false, 00:22:41.369 "compare": false, 00:22:41.369 "compare_and_write": false, 00:22:41.369 "abort": false, 00:22:41.369 "seek_hole": true, 00:22:41.369 "seek_data": true, 00:22:41.369 "copy": false, 00:22:41.369 "nvme_iov_md": false 00:22:41.369 }, 00:22:41.370 "driver_specific": { 00:22:41.370 "lvol": { 00:22:41.370 "lvol_store_uuid": "15b83965-7e50-4b17-b046-a89e1ba8e36f", 00:22:41.370 "base_bdev": "nvme0n1", 00:22:41.370 "thin_provision": true, 00:22:41.370 "num_allocated_clusters": 0, 00:22:41.370 "snapshot": false, 00:22:41.370 "clone": false, 00:22:41.370 "esnap_clone": false 00:22:41.370 } 00:22:41.370 } 00:22:41.370 } 00:22:41.370 ]' 00:22:41.370 15:15:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:41.370 15:15:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:41.628 15:15:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:41.628 15:15:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544
00:22:41.628 15:15:42 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:22:41.628 15:15:42 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424
00:22:41.628 15:15:42 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60
00:22:41.628 15:15:42 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
00:22:41.628 [2024-11-20 15:15:42.445861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:41.628 [2024-11-20 15:15:42.445954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:22:41.628 [2024-11-20 15:15:42.445994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:22:41.628 [2024-11-20 15:15:42.446015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:41.628 [2024-11-20 15:15:42.450579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:41.628 [2024-11-20 15:15:42.450648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:41.628 [2024-11-20 15:15:42.450679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.468 ms
00:22:41.628 [2024-11-20 15:15:42.450702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:41.628 [2024-11-20 15:15:42.450975] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:22:41.628 [2024-11-20 15:15:42.452104] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:22:41.628 [2024-11-20 15:15:42.452178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:41.628 [2024-11-20 15:15:42.452205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:22:41.628 [2024-11-20 15:15:42.452233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms
00:22:41.628 [2024-11-20 15:15:42.452256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:41.628 [2024-11-20 15:15:42.452630] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9ea78f07-8cfa-4fd3-80ff-f63c8fb0b0f1
00:22:41.629 [2024-11-20 15:15:42.454634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:41.629 [2024-11-20 15:15:42.454695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock
00:22:41.629 [2024-11-20 15:15:42.454738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:22:41.629 [2024-11-20 15:15:42.454766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:41.889 [2024-11-20 15:15:42.464332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:41.889 [2024-11-20 15:15:42.464409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:22:41.889 [2024-11-20 15:15:42.464440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.340 ms
00:22:41.889 [2024-11-20 15:15:42.464467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:41.889 [2024-11-20 15:15:42.464770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:41.889 [2024-11-20 15:15:42.464822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:22:41.889 [2024-11-20 15:15:42.464848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms
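The bdev_ftl_create call above and the layout report below agree arithmetically. A hedged back-of-envelope check, assuming 4 KiB logical blocks, the 10% --overprovisioning requested above, and the 4-byte L2P entries the log reports (variable names below are ours, not SPDK's):

  base_mib=102400                                          # data_btm region, per the layout dump
  entries=$(( base_mib * 90 / 100 * 1024 * 1024 / 4096 ))  # usable blocks after 10% OP
  l2p_mib=$(( entries * 4 / 1024 / 1024 ))                 # one 4-byte entry per block
  echo "$entries L2P entries, $l2p_mib MiB L2P region"     # -> 23592960 entries, 90 MiB

This matches the "L2P entries: 23592960" line just below and the 90.00 MiB l2p region in the NV cache layout that follows.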
00:22:41.889 [2024-11-20 15:15:42.464882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:41.889 [2024-11-20 15:15:42.464996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:41.889 [2024-11-20 15:15:42.465032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:22:41.889 [2024-11-20 15:15:42.465056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:22:41.889 [2024-11-20 15:15:42.465085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:41.889 [2024-11-20 15:15:42.465184] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:22:41.889 [2024-11-20 15:15:42.471087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:41.889 [2024-11-20 15:15:42.471150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:22:41.889 [2024-11-20 15:15:42.471181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.919 ms
00:22:41.889 [2024-11-20 15:15:42.471202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:41.889 [2024-11-20 15:15:42.471353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:41.889 [2024-11-20 15:15:42.471385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:22:41.889 [2024-11-20 15:15:42.471414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:22:41.889 [2024-11-20 15:15:42.471461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:41.889 [2024-11-20 15:15:42.471562] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:22:41.889 [2024-11-20 15:15:42.471759] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:22:41.889 [2024-11-20 15:15:42.471806] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:22:41.889 [2024-11-20 15:15:42.471833] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:22:41.889 [2024-11-20 15:15:42.471862] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:22:41.889 [2024-11-20 15:15:42.471885] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:22:41.889 [2024-11-20 15:15:42.471911] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:22:41.889 [2024-11-20 15:15:42.471931] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:22:41.889 [2024-11-20 15:15:42.471957] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:22:41.889 [2024-11-20 15:15:42.471980] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:22:41.889 [2024-11-20 15:15:42.472009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:41.889 [2024-11-20 15:15:42.472031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:22:41.889 [2024-11-20 15:15:42.472059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms
00:22:41.889 [2024-11-20 15:15:42.472079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:41.889 [2024-11-20 15:15:42.472247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:41.889
[2024-11-20 15:15:42.472295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:41.889 [2024-11-20 15:15:42.472324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:41.889 [2024-11-20 15:15:42.472346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.889 [2024-11-20 15:15:42.472575] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:41.889 [2024-11-20 15:15:42.472623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:41.889 [2024-11-20 15:15:42.472653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:41.889 [2024-11-20 15:15:42.472675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.889 [2024-11-20 15:15:42.472700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:41.889 [2024-11-20 15:15:42.472745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:41.889 [2024-11-20 15:15:42.472772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:41.889 [2024-11-20 15:15:42.472791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:41.889 [2024-11-20 15:15:42.472814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:41.889 [2024-11-20 15:15:42.472833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:41.889 [2024-11-20 15:15:42.472857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:41.889 [2024-11-20 15:15:42.472876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:41.889 [2024-11-20 15:15:42.472898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:41.889 [2024-11-20 15:15:42.472917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:41.889 [2024-11-20 15:15:42.472940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:41.889 [2024-11-20 15:15:42.472962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.889 [2024-11-20 15:15:42.472989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:41.889 [2024-11-20 15:15:42.473009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:41.889 [2024-11-20 15:15:42.473036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.889 [2024-11-20 15:15:42.473056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:41.889 [2024-11-20 15:15:42.473079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:41.889 [2024-11-20 15:15:42.473097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:41.889 [2024-11-20 15:15:42.473120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:41.889 [2024-11-20 15:15:42.473141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:41.889 [2024-11-20 15:15:42.473167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:41.889 [2024-11-20 15:15:42.473185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:41.889 [2024-11-20 15:15:42.473209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:41.889 [2024-11-20 15:15:42.473229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:41.889 [2024-11-20 15:15:42.473253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:22:41.889 [2024-11-20 15:15:42.473272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:41.889 [2024-11-20 15:15:42.473295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:41.889 [2024-11-20 15:15:42.473314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:41.889 [2024-11-20 15:15:42.473341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:41.889 [2024-11-20 15:15:42.473359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:41.889 [2024-11-20 15:15:42.473382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:41.889 [2024-11-20 15:15:42.473403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:41.889 [2024-11-20 15:15:42.473425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:41.889 [2024-11-20 15:15:42.473444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:41.889 [2024-11-20 15:15:42.473467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:41.889 [2024-11-20 15:15:42.473485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.889 [2024-11-20 15:15:42.473507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:41.889 [2024-11-20 15:15:42.473526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:41.889 [2024-11-20 15:15:42.473550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.889 [2024-11-20 15:15:42.473569] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:41.889 [2024-11-20 15:15:42.473605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:41.889 [2024-11-20 15:15:42.473625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:41.889 [2024-11-20 15:15:42.473653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.889 [2024-11-20 15:15:42.473673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:41.889 [2024-11-20 15:15:42.473699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:41.889 [2024-11-20 15:15:42.473739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:41.890 [2024-11-20 15:15:42.473766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:41.890 [2024-11-20 15:15:42.473786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:41.890 [2024-11-20 15:15:42.473809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:41.890 [2024-11-20 15:15:42.473835] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:41.890 [2024-11-20 15:15:42.473864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:41.890 [2024-11-20 15:15:42.473891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:41.890 [2024-11-20 15:15:42.473917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:41.890 [2024-11-20 15:15:42.473938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:22:41.890 [2024-11-20 15:15:42.473963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:41.890 [2024-11-20 15:15:42.473989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:41.890 [2024-11-20 15:15:42.474015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:41.890 [2024-11-20 15:15:42.474035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:41.890 [2024-11-20 15:15:42.474060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:41.890 [2024-11-20 15:15:42.474081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:41.890 [2024-11-20 15:15:42.474107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:41.890 [2024-11-20 15:15:42.474129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:41.890 [2024-11-20 15:15:42.474154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:41.890 [2024-11-20 15:15:42.474175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:41.890 [2024-11-20 15:15:42.474201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:41.890 [2024-11-20 15:15:42.474222] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:41.890 [2024-11-20 15:15:42.474253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:41.890 [2024-11-20 15:15:42.474277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:41.890 [2024-11-20 15:15:42.474302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:41.890 [2024-11-20 15:15:42.474323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:41.890 [2024-11-20 15:15:42.474348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:41.890 [2024-11-20 15:15:42.474371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.890 [2024-11-20 15:15:42.474396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:41.890 [2024-11-20 15:15:42.474419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.868 ms 00:22:41.890 [2024-11-20 15:15:42.474443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.890 [2024-11-20 15:15:42.474714] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:22:41.890 [2024-11-20 15:15:42.474798] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:45.177 [2024-11-20 15:15:45.504264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.177 [2024-11-20 15:15:45.504361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:45.177 [2024-11-20 15:15:45.504384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3034.467 ms 00:22:45.177 [2024-11-20 15:15:45.504404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.177 [2024-11-20 15:15:45.554685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.177 [2024-11-20 15:15:45.554776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:45.177 [2024-11-20 15:15:45.554796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.912 ms 00:22:45.177 [2024-11-20 15:15:45.554811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.177 [2024-11-20 15:15:45.555024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.177 [2024-11-20 15:15:45.555048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:45.177 [2024-11-20 15:15:45.555060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:45.177 [2024-11-20 15:15:45.555079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.177 [2024-11-20 15:15:45.618850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.177 [2024-11-20 15:15:45.618935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:45.177 [2024-11-20 15:15:45.618955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.813 ms 00:22:45.177 [2024-11-20 15:15:45.618973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.177 [2024-11-20 15:15:45.619156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.177 [2024-11-20 15:15:45.619177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:45.177 [2024-11-20 15:15:45.619190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:45.177 [2024-11-20 15:15:45.619207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.177 [2024-11-20 15:15:45.620018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.177 [2024-11-20 15:15:45.620067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:45.177 [2024-11-20 15:15:45.620081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:22:45.177 [2024-11-20 15:15:45.620098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.177 [2024-11-20 15:15:45.620240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.177 [2024-11-20 15:15:45.620262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:45.177 [2024-11-20 15:15:45.620274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:22:45.177 [2024-11-20 15:15:45.620298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.177 [2024-11-20 15:15:45.646090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.177 [2024-11-20 15:15:45.646159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:22:45.177 [2024-11-20 15:15:45.646179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.777 ms 00:22:45.177 [2024-11-20 15:15:45.646195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.177 [2024-11-20 15:15:45.662443] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:45.177 [2024-11-20 15:15:45.689954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.177 [2024-11-20 15:15:45.690031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:45.177 [2024-11-20 15:15:45.690053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.645 ms 00:22:45.177 [2024-11-20 15:15:45.690066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.177 [2024-11-20 15:15:45.785456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.177 [2024-11-20 15:15:45.785533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:45.177 [2024-11-20 15:15:45.785558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.393 ms 00:22:45.177 [2024-11-20 15:15:45.785570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.177 [2024-11-20 15:15:45.785885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.177 [2024-11-20 15:15:45.785905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:45.177 [2024-11-20 15:15:45.785927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:22:45.177 [2024-11-20 15:15:45.785939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.177 [2024-11-20 15:15:45.827589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.177 [2024-11-20 15:15:45.827661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:45.177 [2024-11-20 15:15:45.827685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.661 ms 00:22:45.177 [2024-11-20 15:15:45.827697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.177 [2024-11-20 15:15:45.868993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.177 [2024-11-20 15:15:45.869069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:45.177 [2024-11-20 15:15:45.869093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.176 ms 00:22:45.177 [2024-11-20 15:15:45.869104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.177 [2024-11-20 15:15:45.870197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.177 [2024-11-20 15:15:45.870230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:45.177 [2024-11-20 15:15:45.870247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:22:45.177 [2024-11-20 15:15:45.870259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.177 [2024-11-20 15:15:45.982921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.177 [2024-11-20 15:15:45.983013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:45.177 [2024-11-20 15:15:45.983042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.780 ms 00:22:45.177 [2024-11-20 15:15:45.983054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
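Each management step in the startup sequence above is reported by mngt/ftl_mngt.c as a name/duration/status triple, so per-step timings (e.g. "Scrub NV cache" dominating at ~3034 ms) can be tabulated straight from a saved copy of this console output. A minimal sketch, assuming the log is saved as build.log (the file name and the regex are illustrative, mirroring the trace_step lines above; re.DOTALL is needed because entries wrap across physical lines):

    import re

    text = open("build.log").read()
    # Pair each "name: <step>" entry with the "duration: <ms> ms" entry
    # that follows it in the trace_step output.
    steps = re.findall(
        r"name: (.+?) \d\d:\d\d:\d\d\.\d+ .*?duration: ([0-9.]+) ms",
        text, flags=re.DOTALL)
    for step, ms in sorted(steps, key=lambda s: -float(s[1])):
        print(f"{float(ms):10.3f} ms  {step}")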
00:22:45.437 [2024-11-20 15:15:46.025384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.437 [2024-11-20 15:15:46.025471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:45.437 [2024-11-20 15:15:46.025494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.207 ms 00:22:45.437 [2024-11-20 15:15:46.025506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.437 [2024-11-20 15:15:46.067683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.437 [2024-11-20 15:15:46.067779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:45.437 [2024-11-20 15:15:46.067804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.062 ms 00:22:45.437 [2024-11-20 15:15:46.067815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.437 [2024-11-20 15:15:46.106912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.437 [2024-11-20 15:15:46.106982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:45.437 [2024-11-20 15:15:46.107005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.011 ms 00:22:45.437 [2024-11-20 15:15:46.107034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.437 [2024-11-20 15:15:46.107150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.437 [2024-11-20 15:15:46.107169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:45.437 [2024-11-20 15:15:46.107189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:45.437 [2024-11-20 15:15:46.107200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.437 [2024-11-20 15:15:46.107312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.437 [2024-11-20 15:15:46.107326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:45.437 [2024-11-20 15:15:46.107340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:22:45.437 [2024-11-20 15:15:46.107356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.437 [2024-11-20 15:15:46.108772] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:45.437 [2024-11-20 15:15:46.113729] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3668.611 ms, result 0 00:22:45.437 [2024-11-20 15:15:46.114758] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:45.437 { 00:22:45.437 "name": "ftl0", 00:22:45.437 "uuid": "9ea78f07-8cfa-4fd3-80ff-f63c8fb0b0f1" 00:22:45.437 } 00:22:45.437 15:15:46 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:22:45.437 15:15:46 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:22:45.437 15:15:46 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:45.437 15:15:46 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:22:45.437 15:15:46 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:45.437 15:15:46 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:45.437 15:15:46 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:45.696 15:15:46 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:22:45.955 [ 00:22:45.955 { 00:22:45.955 "name": "ftl0", 00:22:45.955 "aliases": [ 00:22:45.955 "9ea78f07-8cfa-4fd3-80ff-f63c8fb0b0f1" 00:22:45.955 ], 00:22:45.955 "product_name": "FTL disk", 00:22:45.955 "block_size": 4096, 00:22:45.955 "num_blocks": 23592960, 00:22:45.955 "uuid": "9ea78f07-8cfa-4fd3-80ff-f63c8fb0b0f1", 00:22:45.955 "assigned_rate_limits": { 00:22:45.955 "rw_ios_per_sec": 0, 00:22:45.955 "rw_mbytes_per_sec": 0, 00:22:45.955 "r_mbytes_per_sec": 0, 00:22:45.955 "w_mbytes_per_sec": 0 00:22:45.955 }, 00:22:45.955 "claimed": false, 00:22:45.955 "zoned": false, 00:22:45.955 "supported_io_types": { 00:22:45.955 "read": true, 00:22:45.955 "write": true, 00:22:45.955 "unmap": true, 00:22:45.955 "flush": true, 00:22:45.955 "reset": false, 00:22:45.955 "nvme_admin": false, 00:22:45.955 "nvme_io": false, 00:22:45.955 "nvme_io_md": false, 00:22:45.955 "write_zeroes": true, 00:22:45.955 "zcopy": false, 00:22:45.955 "get_zone_info": false, 00:22:45.955 "zone_management": false, 00:22:45.955 "zone_append": false, 00:22:45.955 "compare": false, 00:22:45.955 "compare_and_write": false, 00:22:45.955 "abort": false, 00:22:45.955 "seek_hole": false, 00:22:45.955 "seek_data": false, 00:22:45.955 "copy": false, 00:22:45.955 "nvme_iov_md": false 00:22:45.955 }, 00:22:45.955 "driver_specific": { 00:22:45.955 "ftl": { 00:22:45.955 "base_bdev": "5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50", 00:22:45.955 "cache": "nvc0n1p0" 00:22:45.955 } 00:22:45.955 } 00:22:45.955 } 00:22:45.955 ] 00:22:45.955 15:15:46 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:22:45.955 15:15:46 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:22:45.955 15:15:46 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:46.214 15:15:46 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:22:46.214 15:15:46 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:22:46.473 15:15:47 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:22:46.473 { 00:22:46.473 "name": "ftl0", 00:22:46.473 "aliases": [ 00:22:46.473 "9ea78f07-8cfa-4fd3-80ff-f63c8fb0b0f1" 00:22:46.473 ], 00:22:46.473 "product_name": "FTL disk", 00:22:46.473 "block_size": 4096, 00:22:46.473 "num_blocks": 23592960, 00:22:46.473 "uuid": "9ea78f07-8cfa-4fd3-80ff-f63c8fb0b0f1", 00:22:46.473 "assigned_rate_limits": { 00:22:46.473 "rw_ios_per_sec": 0, 00:22:46.473 "rw_mbytes_per_sec": 0, 00:22:46.473 "r_mbytes_per_sec": 0, 00:22:46.473 "w_mbytes_per_sec": 0 00:22:46.473 }, 00:22:46.473 "claimed": false, 00:22:46.473 "zoned": false, 00:22:46.473 "supported_io_types": { 00:22:46.473 "read": true, 00:22:46.473 "write": true, 00:22:46.473 "unmap": true, 00:22:46.473 "flush": true, 00:22:46.473 "reset": false, 00:22:46.473 "nvme_admin": false, 00:22:46.473 "nvme_io": false, 00:22:46.473 "nvme_io_md": false, 00:22:46.473 "write_zeroes": true, 00:22:46.473 "zcopy": false, 00:22:46.473 "get_zone_info": false, 00:22:46.473 "zone_management": false, 00:22:46.473 "zone_append": false, 00:22:46.473 "compare": false, 00:22:46.473 "compare_and_write": false, 00:22:46.473 "abort": false, 00:22:46.473 "seek_hole": false, 00:22:46.473 "seek_data": false, 00:22:46.473 "copy": false, 00:22:46.473 "nvme_iov_md": false 00:22:46.473 }, 00:22:46.473 "driver_specific": { 00:22:46.473 "ftl": { 00:22:46.473 "base_bdev": "5ef374a6-c6fe-4b09-b5ed-f0ba5eac1c50", 
00:22:46.473 "cache": "nvc0n1p0" 00:22:46.473 } 00:22:46.473 } 00:22:46.473 } 00:22:46.473 ]' 00:22:46.473 15:15:47 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:22:46.473 15:15:47 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:22:46.473 15:15:47 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:46.732 [2024-11-20 15:15:47.348275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.732 [2024-11-20 15:15:47.348363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:46.732 [2024-11-20 15:15:47.348388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:46.732 [2024-11-20 15:15:47.348407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.732 [2024-11-20 15:15:47.348452] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:46.732 [2024-11-20 15:15:47.353285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.732 [2024-11-20 15:15:47.353339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:46.732 [2024-11-20 15:15:47.353370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.807 ms 00:22:46.732 [2024-11-20 15:15:47.353383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.732 [2024-11-20 15:15:47.354157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.732 [2024-11-20 15:15:47.354184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:46.732 [2024-11-20 15:15:47.354202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:22:46.732 [2024-11-20 15:15:47.354214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.732 [2024-11-20 15:15:47.357485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.732 [2024-11-20 15:15:47.357721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:46.732 [2024-11-20 15:15:47.357839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.227 ms 00:22:46.732 [2024-11-20 15:15:47.357886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.732 [2024-11-20 15:15:47.363704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.732 [2024-11-20 15:15:47.363983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:46.732 [2024-11-20 15:15:47.364083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.686 ms 00:22:46.732 [2024-11-20 15:15:47.364127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.732 [2024-11-20 15:15:47.408374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.732 [2024-11-20 15:15:47.408770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:46.732 [2024-11-20 15:15:47.408895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.086 ms 00:22:46.732 [2024-11-20 15:15:47.408937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.732 [2024-11-20 15:15:47.436181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.732 [2024-11-20 15:15:47.436570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:46.732 [2024-11-20 15:15:47.436614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 27.053 ms 00:22:46.732 [2024-11-20 15:15:47.436632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.732 [2024-11-20 15:15:47.437065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.732 [2024-11-20 15:15:47.437084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:46.732 [2024-11-20 15:15:47.437102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:22:46.732 [2024-11-20 15:15:47.437114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.732 [2024-11-20 15:15:47.483682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.732 [2024-11-20 15:15:47.483800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:46.732 [2024-11-20 15:15:47.483827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.588 ms 00:22:46.732 [2024-11-20 15:15:47.483839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.732 [2024-11-20 15:15:47.531752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.732 [2024-11-20 15:15:47.531860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:46.732 [2024-11-20 15:15:47.531889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.750 ms 00:22:46.732 [2024-11-20 15:15:47.531900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.992 [2024-11-20 15:15:47.576406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.992 [2024-11-20 15:15:47.576533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:46.992 [2024-11-20 15:15:47.576561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.367 ms 00:22:46.993 [2024-11-20 15:15:47.576575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.993 [2024-11-20 15:15:47.622815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.993 [2024-11-20 15:15:47.622914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:46.993 [2024-11-20 15:15:47.622937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.985 ms 00:22:46.993 [2024-11-20 15:15:47.622949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.993 [2024-11-20 15:15:47.623144] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:46.993 [2024-11-20 15:15:47.623170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623273] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 
[2024-11-20 15:15:47.623628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.623989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:22:46.993 [2024-11-20 15:15:47.624016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:46.993 [2024-11-20 15:15:47.624264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:46.994 [2024-11-20 15:15:47.624613] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:46.994 [2024-11-20 15:15:47.624632] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9ea78f07-8cfa-4fd3-80ff-f63c8fb0b0f1 00:22:46.994 [2024-11-20 15:15:47.624645] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:46.994 [2024-11-20 15:15:47.624659] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:46.994 [2024-11-20 15:15:47.624669] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:46.994 [2024-11-20 15:15:47.624690] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:46.994 [2024-11-20 15:15:47.624701] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:46.994 [2024-11-20 15:15:47.624716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
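The statistics dump above reports "WAF: inf" because no user I/O has reached the device yet: the 960 total writes are all internal (metadata) traffic, so the user-write denominator is zero. A sketch of the relationship the log reports (plain ratio only, not the exact ftl_debug.c arithmetic):

    def waf(total_writes: int, user_writes: int) -> float:
        """Write amplification factor: total media writes per user write."""
        return float("inf") if user_writes == 0 else total_writes / user_writes

    print(waf(960, 0))  # inf -- matches "total writes: 960", "user writes: 0", "WAF: inf"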
00:22:46.994 [2024-11-20 15:15:47.624727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:46.994 [2024-11-20 15:15:47.624751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:46.994 [2024-11-20 15:15:47.624761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:46.994 [2024-11-20 15:15:47.624776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.994 [2024-11-20 15:15:47.624788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:46.994 [2024-11-20 15:15:47.624805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.640 ms 00:22:46.994 [2024-11-20 15:15:47.624817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.994 [2024-11-20 15:15:47.648270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.994 [2024-11-20 15:15:47.648365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:46.994 [2024-11-20 15:15:47.648392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.416 ms 00:22:46.994 [2024-11-20 15:15:47.648403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.994 [2024-11-20 15:15:47.649224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.994 [2024-11-20 15:15:47.649258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:46.994 [2024-11-20 15:15:47.649274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:22:46.994 [2024-11-20 15:15:47.649286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.994 [2024-11-20 15:15:47.726858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.994 [2024-11-20 15:15:47.726956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:46.994 [2024-11-20 15:15:47.726978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.994 [2024-11-20 15:15:47.726990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.994 [2024-11-20 15:15:47.727192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.994 [2024-11-20 15:15:47.727206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:46.994 [2024-11-20 15:15:47.727220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.994 [2024-11-20 15:15:47.727231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.994 [2024-11-20 15:15:47.727330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.994 [2024-11-20 15:15:47.727345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:46.994 [2024-11-20 15:15:47.727367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.994 [2024-11-20 15:15:47.727378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.994 [2024-11-20 15:15:47.727422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.994 [2024-11-20 15:15:47.727434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:46.994 [2024-11-20 15:15:47.727447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.994 [2024-11-20 15:15:47.727458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.253 [2024-11-20 15:15:47.874393] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.253 [2024-11-20 15:15:47.874508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:47.253 [2024-11-20 15:15:47.874531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.253 [2024-11-20 15:15:47.874544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.253 [2024-11-20 15:15:47.991277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.253 [2024-11-20 15:15:47.991586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:47.253 [2024-11-20 15:15:47.991640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.253 [2024-11-20 15:15:47.991653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.253 [2024-11-20 15:15:47.991873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.253 [2024-11-20 15:15:47.991889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:47.253 [2024-11-20 15:15:47.991930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.253 [2024-11-20 15:15:47.991945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.253 [2024-11-20 15:15:47.992023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.253 [2024-11-20 15:15:47.992035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:47.253 [2024-11-20 15:15:47.992050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.253 [2024-11-20 15:15:47.992061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.253 [2024-11-20 15:15:47.992253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.253 [2024-11-20 15:15:47.992268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:47.253 [2024-11-20 15:15:47.992283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.254 [2024-11-20 15:15:47.992299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.254 [2024-11-20 15:15:47.992368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.254 [2024-11-20 15:15:47.992382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:47.254 [2024-11-20 15:15:47.992398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.254 [2024-11-20 15:15:47.992410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.254 [2024-11-20 15:15:47.992504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.254 [2024-11-20 15:15:47.992517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:47.254 [2024-11-20 15:15:47.992536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.254 [2024-11-20 15:15:47.992547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.254 [2024-11-20 15:15:47.992630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.254 [2024-11-20 15:15:47.992643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:47.254 [2024-11-20 15:15:47.992658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.254 [2024-11-20 15:15:47.992669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:47.254 [2024-11-20 15:15:47.992952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 645.689 ms, result 0 00:22:47.254 true 00:22:47.254 15:15:48 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78461 00:22:47.254 15:15:48 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78461 ']' 00:22:47.254 15:15:48 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78461 00:22:47.254 15:15:48 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:47.254 15:15:48 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.254 15:15:48 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78461 00:22:47.254 killing process with pid 78461 00:22:47.254 15:15:48 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:47.254 15:15:48 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:47.254 15:15:48 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78461' 00:22:47.254 15:15:48 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78461 00:22:47.254 15:15:48 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78461 00:22:53.814 15:15:53 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:22:54.073 65536+0 records in 00:22:54.073 65536+0 records out 00:22:54.073 268435456 bytes (268 MB, 256 MiB) copied, 1.14551 s, 234 MB/s 00:22:54.073 15:15:54 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:54.073 [2024-11-20 15:15:54.860760] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
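The dd step above generates the 256 MiB random pattern that spdk_dd (starting here) then writes onto ftl0 via the saved ftl.json config: 65536 blocks x 4 KiB = 268435456 bytes, copied in 1.14551 s. The reported 234 MB/s is the decimal-megabyte rate; a quick arithmetic check (nothing SPDK-specific assumed):

    blocks, bs = 65536, 4096
    total = blocks * bs              # 268435456 bytes = 256 MiB (268 MB decimal)
    print(total / 1.14551 / 1e6)     # ~234.3 MB/s, matching dd's summary line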
00:22:54.073 [2024-11-20 15:15:54.860939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78677 ] 00:22:54.332 [2024-11-20 15:15:55.053240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.590 [2024-11-20 15:15:55.201928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.848 [2024-11-20 15:15:55.668012] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:54.848 [2024-11-20 15:15:55.668119] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:55.108 [2024-11-20 15:15:55.841539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.108 [2024-11-20 15:15:55.841632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:55.108 [2024-11-20 15:15:55.841653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:55.109 [2024-11-20 15:15:55.841664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.109 [2024-11-20 15:15:55.845399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.109 [2024-11-20 15:15:55.845687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:55.109 [2024-11-20 15:15:55.845735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.714 ms 00:22:55.109 [2024-11-20 15:15:55.845749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.109 [2024-11-20 15:15:55.846102] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:55.109 [2024-11-20 15:15:55.847336] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:55.109 [2024-11-20 15:15:55.847376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.109 [2024-11-20 15:15:55.847390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:55.109 [2024-11-20 15:15:55.847404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.294 ms 00:22:55.109 [2024-11-20 15:15:55.847416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.109 [2024-11-20 15:15:55.849997] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:55.109 [2024-11-20 15:15:55.873264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.109 [2024-11-20 15:15:55.873358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:55.109 [2024-11-20 15:15:55.873379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.302 ms 00:22:55.109 [2024-11-20 15:15:55.873391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.109 [2024-11-20 15:15:55.873614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.109 [2024-11-20 15:15:55.873633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:55.109 [2024-11-20 15:15:55.873645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:55.109 [2024-11-20 15:15:55.873657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.109 [2024-11-20 15:15:55.887372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:55.109 [2024-11-20 15:15:55.887431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:55.109 [2024-11-20 15:15:55.887451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.676 ms 00:22:55.109 [2024-11-20 15:15:55.887465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.109 [2024-11-20 15:15:55.887667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.109 [2024-11-20 15:15:55.887688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:55.109 [2024-11-20 15:15:55.887703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:22:55.109 [2024-11-20 15:15:55.887739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.109 [2024-11-20 15:15:55.887784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.109 [2024-11-20 15:15:55.887804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:55.109 [2024-11-20 15:15:55.887818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:55.109 [2024-11-20 15:15:55.887832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.109 [2024-11-20 15:15:55.887867] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:55.109 [2024-11-20 15:15:55.894248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.109 [2024-11-20 15:15:55.894502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:55.109 [2024-11-20 15:15:55.894536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.400 ms 00:22:55.109 [2024-11-20 15:15:55.894551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.109 [2024-11-20 15:15:55.894665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.109 [2024-11-20 15:15:55.894682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:55.109 [2024-11-20 15:15:55.894697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:55.109 [2024-11-20 15:15:55.894711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.109 [2024-11-20 15:15:55.894761] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:55.109 [2024-11-20 15:15:55.894799] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:55.109 [2024-11-20 15:15:55.894847] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:55.109 [2024-11-20 15:15:55.894873] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:55.109 [2024-11-20 15:15:55.894978] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:55.109 [2024-11-20 15:15:55.894996] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:55.109 [2024-11-20 15:15:55.895014] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:55.109 [2024-11-20 15:15:55.895032] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:55.109 [2024-11-20 15:15:55.895054] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:55.109 [2024-11-20 15:15:55.895069] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:55.109 [2024-11-20 15:15:55.895084] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:55.109 [2024-11-20 15:15:55.895098] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:55.109 [2024-11-20 15:15:55.895113] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:55.109 [2024-11-20 15:15:55.895128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.109 [2024-11-20 15:15:55.895142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:55.109 [2024-11-20 15:15:55.895156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:22:55.109 [2024-11-20 15:15:55.895171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.109 [2024-11-20 15:15:55.895259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.109 [2024-11-20 15:15:55.895279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:55.109 [2024-11-20 15:15:55.895293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:55.109 [2024-11-20 15:15:55.895308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.109 [2024-11-20 15:15:55.895417] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:55.109 [2024-11-20 15:15:55.895434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:55.109 [2024-11-20 15:15:55.895449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:55.109 [2024-11-20 15:15:55.895464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.109 [2024-11-20 15:15:55.895478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:55.109 [2024-11-20 15:15:55.895492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:55.109 [2024-11-20 15:15:55.895505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:55.109 [2024-11-20 15:15:55.895519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:55.109 [2024-11-20 15:15:55.895538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:55.109 [2024-11-20 15:15:55.895551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:55.109 [2024-11-20 15:15:55.895564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:55.109 [2024-11-20 15:15:55.895578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:55.109 [2024-11-20 15:15:55.895591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:55.109 [2024-11-20 15:15:55.895618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:55.109 [2024-11-20 15:15:55.895632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:55.109 [2024-11-20 15:15:55.895645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.109 [2024-11-20 15:15:55.895659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:55.109 [2024-11-20 15:15:55.895672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:55.109 [2024-11-20 15:15:55.895685] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.109 [2024-11-20 15:15:55.895698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:55.109 [2024-11-20 15:15:55.895712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:55.109 [2024-11-20 15:15:55.895736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.109 [2024-11-20 15:15:55.895750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:55.109 [2024-11-20 15:15:55.895763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:55.109 [2024-11-20 15:15:55.895776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.109 [2024-11-20 15:15:55.895789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:55.109 [2024-11-20 15:15:55.895802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:55.109 [2024-11-20 15:15:55.895815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.109 [2024-11-20 15:15:55.895827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:55.109 [2024-11-20 15:15:55.895858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:55.109 [2024-11-20 15:15:55.895871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.109 [2024-11-20 15:15:55.895884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:55.109 [2024-11-20 15:15:55.895896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:55.109 [2024-11-20 15:15:55.895909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:55.109 [2024-11-20 15:15:55.895922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:55.109 [2024-11-20 15:15:55.895935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:55.109 [2024-11-20 15:15:55.895948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:55.109 [2024-11-20 15:15:55.895973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:55.109 [2024-11-20 15:15:55.895986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:55.109 [2024-11-20 15:15:55.895998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.109 [2024-11-20 15:15:55.896014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:55.109 [2024-11-20 15:15:55.896026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:55.110 [2024-11-20 15:15:55.896038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.110 [2024-11-20 15:15:55.896050] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:55.110 [2024-11-20 15:15:55.896064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:55.110 [2024-11-20 15:15:55.896077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:55.110 [2024-11-20 15:15:55.896095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.110 [2024-11-20 15:15:55.896109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:55.110 [2024-11-20 15:15:55.896122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:55.110 [2024-11-20 15:15:55.896134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:55.110 
[2024-11-20 15:15:55.896146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:55.110 [2024-11-20 15:15:55.896158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:55.110 [2024-11-20 15:15:55.896171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:55.110 [2024-11-20 15:15:55.896185] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:55.110 [2024-11-20 15:15:55.896201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:55.110 [2024-11-20 15:15:55.896216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:55.110 [2024-11-20 15:15:55.896229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:55.110 [2024-11-20 15:15:55.896243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:55.110 [2024-11-20 15:15:55.896257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:55.110 [2024-11-20 15:15:55.896270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:55.110 [2024-11-20 15:15:55.896284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:55.110 [2024-11-20 15:15:55.896297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:55.110 [2024-11-20 15:15:55.896311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:55.110 [2024-11-20 15:15:55.896324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:55.110 [2024-11-20 15:15:55.896338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:55.110 [2024-11-20 15:15:55.896352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:55.110 [2024-11-20 15:15:55.896365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:55.110 [2024-11-20 15:15:55.896378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:55.110 [2024-11-20 15:15:55.896392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:55.110 [2024-11-20 15:15:55.896405] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:55.110 [2024-11-20 15:15:55.896420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:55.110 [2024-11-20 15:15:55.896434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:55.110 [2024-11-20 15:15:55.896451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:55.110 [2024-11-20 15:15:55.896465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:55.110 [2024-11-20 15:15:55.896479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:55.110 [2024-11-20 15:15:55.896493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.110 [2024-11-20 15:15:55.896506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:55.110 [2024-11-20 15:15:55.896525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:22:55.110 [2024-11-20 15:15:55.896539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.368 [2024-11-20 15:15:55.948493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.368 [2024-11-20 15:15:55.948594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:55.368 [2024-11-20 15:15:55.948620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.959 ms 00:22:55.368 [2024-11-20 15:15:55.948638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.368 [2024-11-20 15:15:55.948960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.368 [2024-11-20 15:15:55.948982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:55.369 [2024-11-20 15:15:55.949000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:55.369 [2024-11-20 15:15:55.949016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.369 [2024-11-20 15:15:56.019299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.369 [2024-11-20 15:15:56.019390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:55.369 [2024-11-20 15:15:56.019416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.357 ms 00:22:55.369 [2024-11-20 15:15:56.019429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.369 [2024-11-20 15:15:56.019606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.369 [2024-11-20 15:15:56.019621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:55.369 [2024-11-20 15:15:56.019634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:55.369 [2024-11-20 15:15:56.019645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.369 [2024-11-20 15:15:56.020433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.369 [2024-11-20 15:15:56.020465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:55.369 [2024-11-20 15:15:56.020478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:22:55.369 [2024-11-20 15:15:56.020499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.369 [2024-11-20 15:15:56.020657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.369 [2024-11-20 15:15:56.020674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:55.369 [2024-11-20 15:15:56.020685] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:22:55.369 [2024-11-20 15:15:56.020697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.369 [2024-11-20 15:15:56.046483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.369 [2024-11-20 15:15:56.046581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:55.369 [2024-11-20 15:15:56.046623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.778 ms 00:22:55.369 [2024-11-20 15:15:56.046640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.369 [2024-11-20 15:15:56.069655] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:55.369 [2024-11-20 15:15:56.069796] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:55.369 [2024-11-20 15:15:56.069821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.369 [2024-11-20 15:15:56.069836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:55.369 [2024-11-20 15:15:56.069854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.924 ms 00:22:55.369 [2024-11-20 15:15:56.069867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.369 [2024-11-20 15:15:56.105608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.369 [2024-11-20 15:15:56.105761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:55.369 [2024-11-20 15:15:56.105828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.589 ms 00:22:55.369 [2024-11-20 15:15:56.105841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.369 [2024-11-20 15:15:56.130469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.369 [2024-11-20 15:15:56.130571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:55.369 [2024-11-20 15:15:56.130592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.433 ms 00:22:55.369 [2024-11-20 15:15:56.130605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.369 [2024-11-20 15:15:56.151731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.369 [2024-11-20 15:15:56.151849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:55.369 [2024-11-20 15:15:56.151873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.917 ms 00:22:55.369 [2024-11-20 15:15:56.151889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.369 [2024-11-20 15:15:56.153137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.369 [2024-11-20 15:15:56.153392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:55.369 [2024-11-20 15:15:56.153442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:22:55.369 [2024-11-20 15:15:56.153459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.629 [2024-11-20 15:15:56.258190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.629 [2024-11-20 15:15:56.258270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:55.629 [2024-11-20 15:15:56.258292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 104.778 ms 00:22:55.629 [2024-11-20 15:15:56.258305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.629 [2024-11-20 15:15:56.276574] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:55.629 [2024-11-20 15:15:56.305392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.629 [2024-11-20 15:15:56.305499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:55.629 [2024-11-20 15:15:56.305521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.928 ms 00:22:55.629 [2024-11-20 15:15:56.305535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.629 [2024-11-20 15:15:56.305768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.629 [2024-11-20 15:15:56.305787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:55.629 [2024-11-20 15:15:56.305802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:55.629 [2024-11-20 15:15:56.305813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.629 [2024-11-20 15:15:56.305895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.629 [2024-11-20 15:15:56.305909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:55.629 [2024-11-20 15:15:56.305921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:55.629 [2024-11-20 15:15:56.305934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.629 [2024-11-20 15:15:56.305985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.629 [2024-11-20 15:15:56.306006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:55.629 [2024-11-20 15:15:56.306018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:55.629 [2024-11-20 15:15:56.306029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.629 [2024-11-20 15:15:56.306077] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:55.629 [2024-11-20 15:15:56.306091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.629 [2024-11-20 15:15:56.306103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:55.629 [2024-11-20 15:15:56.306115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:55.629 [2024-11-20 15:15:56.306126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.629 [2024-11-20 15:15:56.352770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.629 [2024-11-20 15:15:56.353105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:55.629 [2024-11-20 15:15:56.353139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.686 ms 00:22:55.629 [2024-11-20 15:15:56.353152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.629 [2024-11-20 15:15:56.353417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.629 [2024-11-20 15:15:56.353434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:55.629 [2024-11-20 15:15:56.353448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:55.629 [2024-11-20 15:15:56.353459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:55.629 [2024-11-20 15:15:56.355060] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:55.629 [2024-11-20 15:15:56.361880] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 513.925 ms, result 0 00:22:55.629 [2024-11-20 15:15:56.363039] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:55.629 [2024-11-20 15:15:56.384978] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:56.566  [2024-11-20T15:15:58.779Z] Copying: 27/256 [MB] (27 MBps) [2024-11-20T15:15:59.717Z] Copying: 54/256 [MB] (26 MBps) [2024-11-20T15:16:00.675Z] Copying: 81/256 [MB] (26 MBps) [2024-11-20T15:16:01.613Z] Copying: 108/256 [MB] (26 MBps) [2024-11-20T15:16:02.548Z] Copying: 135/256 [MB] (27 MBps) [2024-11-20T15:16:03.497Z] Copying: 162/256 [MB] (26 MBps) [2024-11-20T15:16:04.433Z] Copying: 188/256 [MB] (26 MBps) [2024-11-20T15:16:05.811Z] Copying: 214/256 [MB] (26 MBps) [2024-11-20T15:16:06.071Z] Copying: 241/256 [MB] (26 MBps) [2024-11-20T15:16:06.071Z] Copying: 256/256 [MB] (average 26 MBps)[2024-11-20 15:16:05.944674] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:05.235 [2024-11-20 15:16:05.961963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.235 [2024-11-20 15:16:05.962435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:05.235 [2024-11-20 15:16:05.962495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:05.235 [2024-11-20 15:16:05.962529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.235 [2024-11-20 15:16:05.962631] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:05.235 [2024-11-20 15:16:05.967695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.235 [2024-11-20 15:16:05.967811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:05.235 [2024-11-20 15:16:05.967831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.032 ms 00:23:05.235 [2024-11-20 15:16:05.967844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.235 [2024-11-20 15:16:05.970301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.235 [2024-11-20 15:16:05.970370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:05.235 [2024-11-20 15:16:05.970390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.392 ms 00:23:05.235 [2024-11-20 15:16:05.970403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.235 [2024-11-20 15:16:05.979027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.235 [2024-11-20 15:16:05.979148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:05.235 [2024-11-20 15:16:05.979167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.605 ms 00:23:05.235 [2024-11-20 15:16:05.979179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.235 [2024-11-20 15:16:05.985471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.235 [2024-11-20 15:16:05.985826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:05.235 
[2024-11-20 15:16:05.985865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.227 ms 00:23:05.235 [2024-11-20 15:16:05.985877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.235 [2024-11-20 15:16:06.031528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.235 [2024-11-20 15:16:06.031648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:05.235 [2024-11-20 15:16:06.031671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.567 ms 00:23:05.235 [2024-11-20 15:16:06.031683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.235 [2024-11-20 15:16:06.057541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.235 [2024-11-20 15:16:06.057942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:05.235 [2024-11-20 15:16:06.057986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.733 ms 00:23:05.235 [2024-11-20 15:16:06.057999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.235 [2024-11-20 15:16:06.058239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.235 [2024-11-20 15:16:06.058255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:05.235 [2024-11-20 15:16:06.058269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:05.235 [2024-11-20 15:16:06.058280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.496 [2024-11-20 15:16:06.103861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.496 [2024-11-20 15:16:06.103967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:05.496 [2024-11-20 15:16:06.103989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.624 ms 00:23:05.496 [2024-11-20 15:16:06.104000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.496 [2024-11-20 15:16:06.150498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.496 [2024-11-20 15:16:06.150600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:05.496 [2024-11-20 15:16:06.150621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.427 ms 00:23:05.496 [2024-11-20 15:16:06.150634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.496 [2024-11-20 15:16:06.195510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.496 [2024-11-20 15:16:06.195610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:05.496 [2024-11-20 15:16:06.195631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.762 ms 00:23:05.496 [2024-11-20 15:16:06.195642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.496 [2024-11-20 15:16:06.242289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.496 [2024-11-20 15:16:06.242391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:05.496 [2024-11-20 15:16:06.242413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.497 ms 00:23:05.496 [2024-11-20 15:16:06.242425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.496 [2024-11-20 15:16:06.242562] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:05.496 [2024-11-20 15:16:06.242588] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 
15:16:06.242944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.242987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:23:05.496 [2024-11-20 15:16:06.243261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:05.496 [2024-11-20 15:16:06.243411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:05.497 [2024-11-20 15:16:06.243894] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:05.497 [2024-11-20 15:16:06.243906] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9ea78f07-8cfa-4fd3-80ff-f63c8fb0b0f1 00:23:05.497 [2024-11-20 15:16:06.243919] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:05.497 [2024-11-20 15:16:06.243931] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:05.497 [2024-11-20 15:16:06.243942] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:05.497 [2024-11-20 15:16:06.243954] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:05.497 [2024-11-20 15:16:06.243967] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:05.497 [2024-11-20 15:16:06.243979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:05.497 [2024-11-20 15:16:06.243991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:05.497 [2024-11-20 15:16:06.244001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:05.497 [2024-11-20 15:16:06.244011] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:05.497 [2024-11-20 15:16:06.244023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.497 [2024-11-20 15:16:06.244042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:05.497 [2024-11-20 15:16:06.244055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.465 ms 00:23:05.497 [2024-11-20 15:16:06.244067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.497 [2024-11-20 15:16:06.267346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.497 [2024-11-20 15:16:06.267435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:05.497 [2024-11-20 15:16:06.267455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.273 ms 00:23:05.497 [2024-11-20 15:16:06.267467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.497 [2024-11-20 15:16:06.268208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.497 [2024-11-20 15:16:06.268233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:05.497 [2024-11-20 15:16:06.268247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:23:05.497 [2024-11-20 15:16:06.268259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.820 [2024-11-20 15:16:06.332127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.820 [2024-11-20 15:16:06.332220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:05.820 [2024-11-20 15:16:06.332240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.820 [2024-11-20 15:16:06.332251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.820 [2024-11-20 15:16:06.332459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.820 [2024-11-20 15:16:06.332473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:05.820 [2024-11-20 15:16:06.332487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.820 [2024-11-20 15:16:06.332499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:05.820 [2024-11-20 15:16:06.332573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.820 [2024-11-20 15:16:06.332589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:05.820 [2024-11-20 15:16:06.332601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.820 [2024-11-20 15:16:06.332614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.820 [2024-11-20 15:16:06.332637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.820 [2024-11-20 15:16:06.332654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:05.820 [2024-11-20 15:16:06.332666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.820 [2024-11-20 15:16:06.332677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.820 [2024-11-20 15:16:06.478318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.820 [2024-11-20 15:16:06.478418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:05.820 [2024-11-20 15:16:06.478440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.820 [2024-11-20 15:16:06.478452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.820 [2024-11-20 15:16:06.599464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.820 [2024-11-20 15:16:06.599543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:05.820 [2024-11-20 15:16:06.599564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.820 [2024-11-20 15:16:06.599578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.820 [2024-11-20 15:16:06.599713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.820 [2024-11-20 15:16:06.599740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:05.820 [2024-11-20 15:16:06.599753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.820 [2024-11-20 15:16:06.599764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.820 [2024-11-20 15:16:06.599802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.820 [2024-11-20 15:16:06.599814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:05.820 [2024-11-20 15:16:06.599835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.820 [2024-11-20 15:16:06.599847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.820 [2024-11-20 15:16:06.599991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.820 [2024-11-20 15:16:06.600006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:05.820 [2024-11-20 15:16:06.600020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.820 [2024-11-20 15:16:06.600031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.820 [2024-11-20 15:16:06.600074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.820 [2024-11-20 15:16:06.600088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:05.820 [2024-11-20 15:16:06.600100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.820 
[2024-11-20 15:16:06.600117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.820 [2024-11-20 15:16:06.600170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.820 [2024-11-20 15:16:06.600182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:05.820 [2024-11-20 15:16:06.600194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.820 [2024-11-20 15:16:06.600206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.820 [2024-11-20 15:16:06.600261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.820 [2024-11-20 15:16:06.600274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:05.820 [2024-11-20 15:16:06.600291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.820 [2024-11-20 15:16:06.600302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.820 [2024-11-20 15:16:06.600482] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 639.671 ms, result 0 00:23:07.197 00:23:07.197 00:23:07.197 15:16:07 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:07.197 15:16:07 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78811 00:23:07.197 15:16:07 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78811 00:23:07.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.197 15:16:07 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78811 ']' 00:23:07.197 15:16:07 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.197 15:16:07 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.197 15:16:07 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.197 15:16:07 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.197 15:16:07 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:07.455 [2024-11-20 15:16:08.073238] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:23:07.455 [2024-11-20 15:16:08.073650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78811 ] 00:23:07.455 [2024-11-20 15:16:08.260507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.714 [2024-11-20 15:16:08.401698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.648 15:16:09 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.648 15:16:09 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:08.648 15:16:09 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:08.907 [2024-11-20 15:16:09.675429] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:08.907 [2024-11-20 15:16:09.675523] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:09.167 [2024-11-20 15:16:09.863125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.167 [2024-11-20 15:16:09.863209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:09.167 [2024-11-20 15:16:09.863235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:09.167 [2024-11-20 15:16:09.863247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.167 [2024-11-20 15:16:09.868003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.167 [2024-11-20 15:16:09.868069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:09.167 [2024-11-20 15:16:09.868089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.733 ms 00:23:09.167 [2024-11-20 15:16:09.868119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.167 [2024-11-20 15:16:09.868298] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:09.167 [2024-11-20 15:16:09.869383] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:09.167 [2024-11-20 15:16:09.869424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.167 [2024-11-20 15:16:09.869436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:09.167 [2024-11-20 15:16:09.869451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.145 ms 00:23:09.167 [2024-11-20 15:16:09.869462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.167 [2024-11-20 15:16:09.872023] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:09.167 [2024-11-20 15:16:09.893789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.167 [2024-11-20 15:16:09.893892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:09.167 [2024-11-20 15:16:09.893915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.803 ms 00:23:09.167 [2024-11-20 15:16:09.893930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.167 [2024-11-20 15:16:09.894141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.167 [2024-11-20 15:16:09.894161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:09.167 [2024-11-20 15:16:09.894174] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:09.167 [2024-11-20 15:16:09.894189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.167 [2024-11-20 15:16:09.907405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.167 [2024-11-20 15:16:09.907481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:09.167 [2024-11-20 15:16:09.907501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.167 ms 00:23:09.167 [2024-11-20 15:16:09.907515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.167 [2024-11-20 15:16:09.907745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.167 [2024-11-20 15:16:09.907766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:09.167 [2024-11-20 15:16:09.907778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:23:09.167 [2024-11-20 15:16:09.907792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.167 [2024-11-20 15:16:09.907839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.167 [2024-11-20 15:16:09.907855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:09.167 [2024-11-20 15:16:09.907867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:09.167 [2024-11-20 15:16:09.907880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.167 [2024-11-20 15:16:09.907914] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:09.167 [2024-11-20 15:16:09.913808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.167 [2024-11-20 15:16:09.913850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:09.167 [2024-11-20 15:16:09.913867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.910 ms 00:23:09.167 [2024-11-20 15:16:09.913879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.167 [2024-11-20 15:16:09.913967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.167 [2024-11-20 15:16:09.913980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:09.167 [2024-11-20 15:16:09.913995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:09.167 [2024-11-20 15:16:09.914009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.167 [2024-11-20 15:16:09.914040] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:09.167 [2024-11-20 15:16:09.914080] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:09.167 [2024-11-20 15:16:09.914131] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:09.167 [2024-11-20 15:16:09.914152] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:09.167 [2024-11-20 15:16:09.914255] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:09.167 [2024-11-20 15:16:09.914270] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:09.167 [2024-11-20 15:16:09.914294] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:09.167 [2024-11-20 15:16:09.914308] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:09.167 [2024-11-20 15:16:09.914335] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:09.167 [2024-11-20 15:16:09.914348] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:09.167 [2024-11-20 15:16:09.914365] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:09.167 [2024-11-20 15:16:09.914376] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:09.167 [2024-11-20 15:16:09.914398] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:09.167 [2024-11-20 15:16:09.914409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.167 [2024-11-20 15:16:09.914426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:09.167 [2024-11-20 15:16:09.914438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:23:09.167 [2024-11-20 15:16:09.914454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.167 [2024-11-20 15:16:09.914541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.167 [2024-11-20 15:16:09.914559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:09.167 [2024-11-20 15:16:09.914571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:09.167 [2024-11-20 15:16:09.914587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.167 [2024-11-20 15:16:09.914685] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:09.167 [2024-11-20 15:16:09.914703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:09.167 [2024-11-20 15:16:09.914715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:09.167 [2024-11-20 15:16:09.914768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.167 [2024-11-20 15:16:09.914780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:09.167 [2024-11-20 15:16:09.914795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:09.167 [2024-11-20 15:16:09.914806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:09.167 [2024-11-20 15:16:09.914829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:09.167 [2024-11-20 15:16:09.914839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:09.167 [2024-11-20 15:16:09.914854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:09.167 [2024-11-20 15:16:09.914864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:09.167 [2024-11-20 15:16:09.914880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:09.167 [2024-11-20 15:16:09.914891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:09.167 [2024-11-20 15:16:09.914906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:09.167 [2024-11-20 15:16:09.914916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:09.167 [2024-11-20 15:16:09.914932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.167 
[2024-11-20 15:16:09.914942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:09.167 [2024-11-20 15:16:09.914956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:09.167 [2024-11-20 15:16:09.914966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.167 [2024-11-20 15:16:09.914981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:09.167 [2024-11-20 15:16:09.915004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:09.167 [2024-11-20 15:16:09.915020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.167 [2024-11-20 15:16:09.915031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:09.167 [2024-11-20 15:16:09.915051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:09.167 [2024-11-20 15:16:09.915060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.167 [2024-11-20 15:16:09.915075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:09.167 [2024-11-20 15:16:09.915085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:09.167 [2024-11-20 15:16:09.915100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.167 [2024-11-20 15:16:09.915110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:09.167 [2024-11-20 15:16:09.915125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:09.167 [2024-11-20 15:16:09.915135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.167 [2024-11-20 15:16:09.915152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:09.167 [2024-11-20 15:16:09.915161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:09.167 [2024-11-20 15:16:09.915176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:09.167 [2024-11-20 15:16:09.915185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:09.168 [2024-11-20 15:16:09.915200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:09.168 [2024-11-20 15:16:09.915210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:09.168 [2024-11-20 15:16:09.915225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:09.168 [2024-11-20 15:16:09.915234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:09.168 [2024-11-20 15:16:09.915255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.168 [2024-11-20 15:16:09.915265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:09.168 [2024-11-20 15:16:09.915279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:09.168 [2024-11-20 15:16:09.915289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.168 [2024-11-20 15:16:09.915304] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:09.168 [2024-11-20 15:16:09.915322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:09.168 [2024-11-20 15:16:09.915338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:09.168 [2024-11-20 15:16:09.915349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.168 [2024-11-20 15:16:09.915364] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:09.168 [2024-11-20 15:16:09.915375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:09.168 [2024-11-20 15:16:09.915390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:09.168 [2024-11-20 15:16:09.915401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:09.168 [2024-11-20 15:16:09.915415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:09.168 [2024-11-20 15:16:09.915425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:09.168 [2024-11-20 15:16:09.915442] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:09.168 [2024-11-20 15:16:09.915456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:09.168 [2024-11-20 15:16:09.915480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:09.168 [2024-11-20 15:16:09.915492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:09.168 [2024-11-20 15:16:09.915508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:09.168 [2024-11-20 15:16:09.915519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:09.168 [2024-11-20 15:16:09.915535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:09.168 [2024-11-20 15:16:09.915547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:09.168 [2024-11-20 15:16:09.915563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:09.168 [2024-11-20 15:16:09.915574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:09.168 [2024-11-20 15:16:09.915591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:09.168 [2024-11-20 15:16:09.915601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:09.168 [2024-11-20 15:16:09.915617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:09.168 [2024-11-20 15:16:09.915628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:09.168 [2024-11-20 15:16:09.915644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:09.168 [2024-11-20 15:16:09.915655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:09.168 [2024-11-20 15:16:09.915671] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:09.168 [2024-11-20 
15:16:09.915683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:09.168 [2024-11-20 15:16:09.915706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:09.168 [2024-11-20 15:16:09.915727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:09.168 [2024-11-20 15:16:09.915744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:09.168 [2024-11-20 15:16:09.915755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:09.168 [2024-11-20 15:16:09.915772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.168 [2024-11-20 15:16:09.915785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:09.168 [2024-11-20 15:16:09.915801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.138 ms 00:23:09.168 [2024-11-20 15:16:09.915812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.168 [2024-11-20 15:16:09.966577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.168 [2024-11-20 15:16:09.966658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:09.168 [2024-11-20 15:16:09.966685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.755 ms 00:23:09.168 [2024-11-20 15:16:09.966703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.168 [2024-11-20 15:16:09.966977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.168 [2024-11-20 15:16:09.966993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:09.168 [2024-11-20 15:16:09.967012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:23:09.168 [2024-11-20 15:16:09.967023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.427 [2024-11-20 15:16:10.023303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.427 [2024-11-20 15:16:10.023419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:09.427 [2024-11-20 15:16:10.023446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.324 ms 00:23:09.427 [2024-11-20 15:16:10.023470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.427 [2024-11-20 15:16:10.023656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.427 [2024-11-20 15:16:10.023671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:09.427 [2024-11-20 15:16:10.023691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:09.427 [2024-11-20 15:16:10.023703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.427 [2024-11-20 15:16:10.024564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.427 [2024-11-20 15:16:10.024596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:09.427 [2024-11-20 15:16:10.024623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:23:09.427 [2024-11-20 15:16:10.024636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:09.427 [2024-11-20 15:16:10.024839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.427 [2024-11-20 15:16:10.024866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:09.427 [2024-11-20 15:16:10.024884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:23:09.427 [2024-11-20 15:16:10.024896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.427 [2024-11-20 15:16:10.055103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.427 [2024-11-20 15:16:10.055468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:09.427 [2024-11-20 15:16:10.055509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.211 ms 00:23:09.427 [2024-11-20 15:16:10.055522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.427 [2024-11-20 15:16:10.090465] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:09.427 [2024-11-20 15:16:10.090552] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:09.427 [2024-11-20 15:16:10.090582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.427 [2024-11-20 15:16:10.090596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:09.427 [2024-11-20 15:16:10.090619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.880 ms 00:23:09.427 [2024-11-20 15:16:10.090631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.427 [2024-11-20 15:16:10.125917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.427 [2024-11-20 15:16:10.126054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:09.427 [2024-11-20 15:16:10.126101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.094 ms 00:23:09.427 [2024-11-20 15:16:10.126114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.427 [2024-11-20 15:16:10.149340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.427 [2024-11-20 15:16:10.149751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:09.427 [2024-11-20 15:16:10.149806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.004 ms 00:23:09.427 [2024-11-20 15:16:10.149820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.427 [2024-11-20 15:16:10.173053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.427 [2024-11-20 15:16:10.173154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:09.427 [2024-11-20 15:16:10.173183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.052 ms 00:23:09.427 [2024-11-20 15:16:10.173195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.427 [2024-11-20 15:16:10.174414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.427 [2024-11-20 15:16:10.174457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:09.427 [2024-11-20 15:16:10.174476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:23:09.427 [2024-11-20 15:16:10.174488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.687 [2024-11-20 
15:16:10.281299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.687 [2024-11-20 15:16:10.281402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:09.687 [2024-11-20 15:16:10.281432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.928 ms 00:23:09.687 [2024-11-20 15:16:10.281445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.687 [2024-11-20 15:16:10.298506] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:09.687 [2024-11-20 15:16:10.327950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.687 [2024-11-20 15:16:10.328303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:09.687 [2024-11-20 15:16:10.328347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.331 ms 00:23:09.687 [2024-11-20 15:16:10.328365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.687 [2024-11-20 15:16:10.328539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.687 [2024-11-20 15:16:10.328560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:09.687 [2024-11-20 15:16:10.328574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:09.687 [2024-11-20 15:16:10.328593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.687 [2024-11-20 15:16:10.328671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.687 [2024-11-20 15:16:10.328689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:09.687 [2024-11-20 15:16:10.328701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:09.687 [2024-11-20 15:16:10.328753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.687 [2024-11-20 15:16:10.328786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.687 [2024-11-20 15:16:10.328803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:09.687 [2024-11-20 15:16:10.328815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:09.687 [2024-11-20 15:16:10.328831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.687 [2024-11-20 15:16:10.328883] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:09.687 [2024-11-20 15:16:10.328910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.687 [2024-11-20 15:16:10.328921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:09.687 [2024-11-20 15:16:10.328945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:09.687 [2024-11-20 15:16:10.328956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.687 [2024-11-20 15:16:10.370691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.687 [2024-11-20 15:16:10.370993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:09.687 [2024-11-20 15:16:10.371035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.746 ms 00:23:09.687 [2024-11-20 15:16:10.371048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.687 [2024-11-20 15:16:10.371295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.687 [2024-11-20 15:16:10.371313] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:09.687 [2024-11-20 15:16:10.371331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:09.687 [2024-11-20 15:16:10.371350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.687 [2024-11-20 15:16:10.372957] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:09.687 [2024-11-20 15:16:10.378930] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 510.192 ms, result 0 00:23:09.687 [2024-11-20 15:16:10.380333] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:09.687 Some configs were skipped because the RPC state that can call them passed over. 00:23:09.687 15:16:10 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:23:09.946 [2024-11-20 15:16:10.664800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.946 [2024-11-20 15:16:10.665165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:09.946 [2024-11-20 15:16:10.665293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:23:09.946 [2024-11-20 15:16:10.665347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.946 [2024-11-20 15:16:10.665499] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.376 ms, result 0 00:23:09.946 true 00:23:09.946 15:16:10 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:23:10.277 [2024-11-20 15:16:10.904367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.277 [2024-11-20 15:16:10.904663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:10.277 [2024-11-20 15:16:10.904707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.338 ms 00:23:10.277 [2024-11-20 15:16:10.904737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.277 [2024-11-20 15:16:10.904851] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.821 ms, result 0 00:23:10.277 true 00:23:10.277 15:16:10 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78811 00:23:10.277 15:16:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78811 ']' 00:23:10.277 15:16:10 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78811 00:23:10.277 15:16:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:10.277 15:16:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.277 15:16:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78811 00:23:10.277 15:16:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:10.277 15:16:10 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:10.277 15:16:10 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78811' 00:23:10.277 killing process with pid 78811 00:23:10.277 15:16:10 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78811 00:23:10.277 15:16:10 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78811 00:23:11.657 [2024-11-20 15:16:12.211432] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.657 [2024-11-20 15:16:12.211786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:11.657 [2024-11-20 15:16:12.211894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:11.657 [2024-11-20 15:16:12.211939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.657 [2024-11-20 15:16:12.211990] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:11.657 [2024-11-20 15:16:12.216903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.657 [2024-11-20 15:16:12.216962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:11.657 [2024-11-20 15:16:12.216989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.884 ms 00:23:11.657 [2024-11-20 15:16:12.217001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.657 [2024-11-20 15:16:12.217348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.657 [2024-11-20 15:16:12.217365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:11.657 [2024-11-20 15:16:12.217381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:23:11.657 [2024-11-20 15:16:12.217393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.657 [2024-11-20 15:16:12.221077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.657 [2024-11-20 15:16:12.221300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:11.657 [2024-11-20 15:16:12.221339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.655 ms 00:23:11.657 [2024-11-20 15:16:12.221352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.657 [2024-11-20 15:16:12.227604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.657 [2024-11-20 15:16:12.227680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:11.657 [2024-11-20 15:16:12.227701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.160 ms 00:23:11.657 [2024-11-20 15:16:12.227713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.657 [2024-11-20 15:16:12.245461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.657 [2024-11-20 15:16:12.245562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:11.657 [2024-11-20 15:16:12.245601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.594 ms 00:23:11.657 [2024-11-20 15:16:12.245632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.657 [2024-11-20 15:16:12.257772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.657 [2024-11-20 15:16:12.258125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:11.657 [2024-11-20 15:16:12.258167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.998 ms 00:23:11.657 [2024-11-20 15:16:12.258180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.657 [2024-11-20 15:16:12.258392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.657 [2024-11-20 15:16:12.258408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:11.657 [2024-11-20 15:16:12.258424] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:23:11.657 [2024-11-20 15:16:12.258436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.657 [2024-11-20 15:16:12.276301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.657 [2024-11-20 15:16:12.276392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:11.657 [2024-11-20 15:16:12.276418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.843 ms 00:23:11.657 [2024-11-20 15:16:12.276446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.657 [2024-11-20 15:16:12.294308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.657 [2024-11-20 15:16:12.294405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:11.657 [2024-11-20 15:16:12.294463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.727 ms 00:23:11.657 [2024-11-20 15:16:12.294475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.657 [2024-11-20 15:16:12.312376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.657 [2024-11-20 15:16:12.312474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:11.657 [2024-11-20 15:16:12.312503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.781 ms 00:23:11.657 [2024-11-20 15:16:12.312531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.657 [2024-11-20 15:16:12.329751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.657 [2024-11-20 15:16:12.329849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:11.657 [2024-11-20 15:16:12.329878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.032 ms 00:23:11.657 [2024-11-20 15:16:12.329889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.657 [2024-11-20 15:16:12.330002] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:11.657 [2024-11-20 15:16:12.330027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:11.657 [2024-11-20 15:16:12.330049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:11.657 [2024-11-20 15:16:12.330062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:11.657 [2024-11-20 15:16:12.330081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:11.657 [2024-11-20 15:16:12.330093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:11.657 [2024-11-20 15:16:12.330120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:11.657 [2024-11-20 15:16:12.330132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 
15:16:12.330193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:23:11.658 [2024-11-20 15:16:12.330590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.330997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:11.658 [2024-11-20 15:16:12.331579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:11.659 [2024-11-20 15:16:12.331591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:11.659 [2024-11-20 15:16:12.331611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:11.659 [2024-11-20 15:16:12.331623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:11.659 [2024-11-20 15:16:12.331642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:11.659 [2024-11-20 15:16:12.331665] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:11.659 [2024-11-20 15:16:12.331696] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9ea78f07-8cfa-4fd3-80ff-f63c8fb0b0f1 00:23:11.659 [2024-11-20 15:16:12.331738] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:11.659 [2024-11-20 15:16:12.331765] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:11.659 [2024-11-20 15:16:12.331777] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:11.659 [2024-11-20 15:16:12.331796] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:11.659 [2024-11-20 15:16:12.331808] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:11.659 [2024-11-20 15:16:12.331826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:11.659 [2024-11-20 15:16:12.331837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:11.659 [2024-11-20 15:16:12.331851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:11.659 [2024-11-20 15:16:12.331861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:11.659 [2024-11-20 15:16:12.331876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:11.659 [2024-11-20 15:16:12.331899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:11.659 [2024-11-20 15:16:12.331915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.885 ms 00:23:11.659 [2024-11-20 15:16:12.331926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.659 [2024-11-20 15:16:12.355687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.659 [2024-11-20 15:16:12.355794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:11.659 [2024-11-20 15:16:12.355825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.738 ms 00:23:11.659 [2024-11-20 15:16:12.355836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.659 [2024-11-20 15:16:12.356517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.659 [2024-11-20 15:16:12.356542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:11.659 [2024-11-20 15:16:12.356558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:23:11.659 [2024-11-20 15:16:12.356574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.659 [2024-11-20 15:16:12.436259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.659 [2024-11-20 15:16:12.436380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:11.659 [2024-11-20 15:16:12.436409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.659 [2024-11-20 15:16:12.436423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.659 [2024-11-20 15:16:12.436649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.659 [2024-11-20 15:16:12.436665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:11.659 [2024-11-20 15:16:12.436685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.659 [2024-11-20 15:16:12.436704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.659 [2024-11-20 15:16:12.436823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.659 [2024-11-20 15:16:12.436840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:11.659 [2024-11-20 15:16:12.436866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.659 [2024-11-20 15:16:12.436879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.659 [2024-11-20 15:16:12.436911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.659 [2024-11-20 15:16:12.436924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:11.659 [2024-11-20 15:16:12.436943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.659 [2024-11-20 15:16:12.436955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.919 [2024-11-20 15:16:12.578495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.919 [2024-11-20 15:16:12.578842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:11.919 [2024-11-20 15:16:12.578885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.919 [2024-11-20 15:16:12.578899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.919 [2024-11-20 
15:16:12.697916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.919 [2024-11-20 15:16:12.698249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:11.919 [2024-11-20 15:16:12.698293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.919 [2024-11-20 15:16:12.698316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.919 [2024-11-20 15:16:12.698491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.919 [2024-11-20 15:16:12.698507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:11.919 [2024-11-20 15:16:12.698534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.919 [2024-11-20 15:16:12.698546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.919 [2024-11-20 15:16:12.698591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.919 [2024-11-20 15:16:12.698604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:11.919 [2024-11-20 15:16:12.698624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.919 [2024-11-20 15:16:12.698636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.919 [2024-11-20 15:16:12.698856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.919 [2024-11-20 15:16:12.698873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:11.919 [2024-11-20 15:16:12.698892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.919 [2024-11-20 15:16:12.698903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.919 [2024-11-20 15:16:12.698962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.919 [2024-11-20 15:16:12.698977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:11.919 [2024-11-20 15:16:12.698994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.919 [2024-11-20 15:16:12.699006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.919 [2024-11-20 15:16:12.699070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.919 [2024-11-20 15:16:12.699083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:11.919 [2024-11-20 15:16:12.699106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.919 [2024-11-20 15:16:12.699118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.919 [2024-11-20 15:16:12.699178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.919 [2024-11-20 15:16:12.699191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:11.919 [2024-11-20 15:16:12.699209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.919 [2024-11-20 15:16:12.699220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.919 [2024-11-20 15:16:12.699409] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 488.724 ms, result 0 00:23:13.298 15:16:13 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:13.298 15:16:13 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:13.298 [2024-11-20 15:16:13.972307] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:23:13.298 [2024-11-20 15:16:13.972483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78886 ] 00:23:13.557 [2024-11-20 15:16:14.161684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.557 [2024-11-20 15:16:14.313259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.126 [2024-11-20 15:16:14.734771] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:14.126 [2024-11-20 15:16:14.735184] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:14.126 [2024-11-20 15:16:14.903812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.126 [2024-11-20 15:16:14.903898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:14.126 [2024-11-20 15:16:14.903917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:14.126 [2024-11-20 15:16:14.903930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.126 [2024-11-20 15:16:14.907682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.126 [2024-11-20 15:16:14.907762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:14.126 [2024-11-20 15:16:14.907779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.730 ms 00:23:14.126 [2024-11-20 15:16:14.907790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.126 [2024-11-20 15:16:14.907948] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:14.126 [2024-11-20 15:16:14.909063] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:14.126 [2024-11-20 15:16:14.909101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.126 [2024-11-20 15:16:14.909115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:14.126 [2024-11-20 15:16:14.909127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.169 ms 00:23:14.126 [2024-11-20 15:16:14.909138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.126 [2024-11-20 15:16:14.911666] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:14.126 [2024-11-20 15:16:14.934015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.127 [2024-11-20 15:16:14.934405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:14.127 [2024-11-20 15:16:14.934439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.378 ms 00:23:14.127 [2024-11-20 15:16:14.934453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.127 [2024-11-20 15:16:14.934707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.127 [2024-11-20 15:16:14.934767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:14.127 [2024-11-20 15:16:14.934799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.063 ms 00:23:14.127 [2024-11-20 15:16:14.934812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.127 [2024-11-20 15:16:14.949046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.127 [2024-11-20 15:16:14.949399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:14.127 [2024-11-20 15:16:14.949432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.189 ms 00:23:14.127 [2024-11-20 15:16:14.949447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.127 [2024-11-20 15:16:14.949684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.127 [2024-11-20 15:16:14.949703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:14.127 [2024-11-20 15:16:14.949717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:23:14.127 [2024-11-20 15:16:14.949758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.127 [2024-11-20 15:16:14.949818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.127 [2024-11-20 15:16:14.949838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:14.127 [2024-11-20 15:16:14.949852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:14.127 [2024-11-20 15:16:14.949865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.127 [2024-11-20 15:16:14.949900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:14.127 [2024-11-20 15:16:14.955913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.127 [2024-11-20 15:16:14.956134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:14.127 [2024-11-20 15:16:14.956165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.034 ms 00:23:14.127 [2024-11-20 15:16:14.956177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.127 [2024-11-20 15:16:14.956282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.127 [2024-11-20 15:16:14.956296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:14.127 [2024-11-20 15:16:14.956308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:14.127 [2024-11-20 15:16:14.956320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.127 [2024-11-20 15:16:14.956350] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:14.127 [2024-11-20 15:16:14.956386] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:14.127 [2024-11-20 15:16:14.956428] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:14.127 [2024-11-20 15:16:14.956449] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:14.127 [2024-11-20 15:16:14.956547] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:14.127 [2024-11-20 15:16:14.956561] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:14.127 [2024-11-20 15:16:14.956575] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:14.127 [2024-11-20 15:16:14.956589] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:14.127 [2024-11-20 15:16:14.956606] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:14.127 [2024-11-20 15:16:14.956619] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:14.127 [2024-11-20 15:16:14.956630] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:14.127 [2024-11-20 15:16:14.956641] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:14.127 [2024-11-20 15:16:14.956652] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:14.127 [2024-11-20 15:16:14.956664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.127 [2024-11-20 15:16:14.956676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:14.127 [2024-11-20 15:16:14.956687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:23:14.127 [2024-11-20 15:16:14.956698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.127 [2024-11-20 15:16:14.956796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.127 [2024-11-20 15:16:14.956813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:14.127 [2024-11-20 15:16:14.956825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:14.127 [2024-11-20 15:16:14.956836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.127 [2024-11-20 15:16:14.956937] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:14.127 [2024-11-20 15:16:14.956952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:14.127 [2024-11-20 15:16:14.956963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:14.127 [2024-11-20 15:16:14.956974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.127 [2024-11-20 15:16:14.956985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:14.127 [2024-11-20 15:16:14.956995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:14.127 [2024-11-20 15:16:14.957005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:14.127 [2024-11-20 15:16:14.957015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:14.127 [2024-11-20 15:16:14.957026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:14.127 [2024-11-20 15:16:14.957035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:14.127 [2024-11-20 15:16:14.957046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:14.127 [2024-11-20 15:16:14.957056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:14.127 [2024-11-20 15:16:14.957065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:14.127 [2024-11-20 15:16:14.957088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:14.127 [2024-11-20 15:16:14.957098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:14.127 [2024-11-20 15:16:14.957108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.127 [2024-11-20 15:16:14.957118] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:14.127 [2024-11-20 15:16:14.957128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:14.127 [2024-11-20 15:16:14.957138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.127 [2024-11-20 15:16:14.957148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:14.127 [2024-11-20 15:16:14.957158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:14.127 [2024-11-20 15:16:14.957168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.127 [2024-11-20 15:16:14.957177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:14.127 [2024-11-20 15:16:14.957187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:14.127 [2024-11-20 15:16:14.957196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.127 [2024-11-20 15:16:14.957205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:14.127 [2024-11-20 15:16:14.957215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:14.127 [2024-11-20 15:16:14.957225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.127 [2024-11-20 15:16:14.957234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:14.127 [2024-11-20 15:16:14.957244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:14.127 [2024-11-20 15:16:14.957254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.127 [2024-11-20 15:16:14.957263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:14.127 [2024-11-20 15:16:14.957272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:14.127 [2024-11-20 15:16:14.957282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:14.127 [2024-11-20 15:16:14.957291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:14.127 [2024-11-20 15:16:14.957300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:14.127 [2024-11-20 15:16:14.957310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:14.127 [2024-11-20 15:16:14.957319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:14.127 [2024-11-20 15:16:14.957328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:14.127 [2024-11-20 15:16:14.957337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.127 [2024-11-20 15:16:14.957346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:14.127 [2024-11-20 15:16:14.957356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:14.127 [2024-11-20 15:16:14.957367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.127 [2024-11-20 15:16:14.957378] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:14.127 [2024-11-20 15:16:14.957388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:14.127 [2024-11-20 15:16:14.957399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:14.127 [2024-11-20 15:16:14.957413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.127 [2024-11-20 15:16:14.957424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:14.127 
[2024-11-20 15:16:14.957433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:14.127 [2024-11-20 15:16:14.957443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:14.127 [2024-11-20 15:16:14.957453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:14.127 [2024-11-20 15:16:14.957462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:14.127 [2024-11-20 15:16:14.957472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:14.127 [2024-11-20 15:16:14.957483] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:14.127 [2024-11-20 15:16:14.957497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:14.128 [2024-11-20 15:16:14.957510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:14.128 [2024-11-20 15:16:14.957522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:14.128 [2024-11-20 15:16:14.957533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:14.128 [2024-11-20 15:16:14.957544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:14.128 [2024-11-20 15:16:14.957554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:14.128 [2024-11-20 15:16:14.957565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:14.128 [2024-11-20 15:16:14.957575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:14.128 [2024-11-20 15:16:14.957595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:14.128 [2024-11-20 15:16:14.957606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:14.128 [2024-11-20 15:16:14.957617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:14.128 [2024-11-20 15:16:14.957628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:14.128 [2024-11-20 15:16:14.957639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:14.128 [2024-11-20 15:16:14.957650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:14.128 [2024-11-20 15:16:14.957662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:14.128 [2024-11-20 15:16:14.957673] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:14.128 [2024-11-20 15:16:14.957685] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:14.128 [2024-11-20 15:16:14.957698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:14.128 [2024-11-20 15:16:14.957710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:14.128 [2024-11-20 15:16:14.957731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:14.128 [2024-11-20 15:16:14.957744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:14.128 [2024-11-20 15:16:14.957756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.128 [2024-11-20 15:16:14.957768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:14.128 [2024-11-20 15:16:14.957785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:23:14.128 [2024-11-20 15:16:14.957795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.388 [2024-11-20 15:16:15.008135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.388 [2024-11-20 15:16:15.008222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:14.388 [2024-11-20 15:16:15.008243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.347 ms 00:23:14.388 [2024-11-20 15:16:15.008256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.388 [2024-11-20 15:16:15.008515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.388 [2024-11-20 15:16:15.008531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:14.388 [2024-11-20 15:16:15.008545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:14.388 [2024-11-20 15:16:15.008557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.388 [2024-11-20 15:16:15.082937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.388 [2024-11-20 15:16:15.083258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:14.388 [2024-11-20 15:16:15.083296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.466 ms 00:23:14.388 [2024-11-20 15:16:15.083308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.388 [2024-11-20 15:16:15.083481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.388 [2024-11-20 15:16:15.083495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:14.388 [2024-11-20 15:16:15.083508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:14.388 [2024-11-20 15:16:15.083519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.388 [2024-11-20 15:16:15.084273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.388 [2024-11-20 15:16:15.084291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:14.388 [2024-11-20 15:16:15.084304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:23:14.388 [2024-11-20 15:16:15.084324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.388 [2024-11-20 
15:16:15.084473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.388 [2024-11-20 15:16:15.084489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:14.388 [2024-11-20 15:16:15.084500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:23:14.388 [2024-11-20 15:16:15.084511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.388 [2024-11-20 15:16:15.109131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.388 [2024-11-20 15:16:15.109224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:14.388 [2024-11-20 15:16:15.109244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.629 ms 00:23:14.388 [2024-11-20 15:16:15.109257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.388 [2024-11-20 15:16:15.132534] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:14.388 [2024-11-20 15:16:15.132639] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:14.388 [2024-11-20 15:16:15.132662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.388 [2024-11-20 15:16:15.132675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:14.388 [2024-11-20 15:16:15.132692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.203 ms 00:23:14.388 [2024-11-20 15:16:15.132703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.388 [2024-11-20 15:16:15.166482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.388 [2024-11-20 15:16:15.166635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:14.388 [2024-11-20 15:16:15.166667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.572 ms 00:23:14.388 [2024-11-20 15:16:15.166680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.388 [2024-11-20 15:16:15.188356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.388 [2024-11-20 15:16:15.188742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:14.388 [2024-11-20 15:16:15.188775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.471 ms 00:23:14.388 [2024-11-20 15:16:15.188787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.388 [2024-11-20 15:16:15.210880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.388 [2024-11-20 15:16:15.211254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:14.388 [2024-11-20 15:16:15.211284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.908 ms 00:23:14.388 [2024-11-20 15:16:15.211298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.388 [2024-11-20 15:16:15.212303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.388 [2024-11-20 15:16:15.212347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:14.388 [2024-11-20 15:16:15.212363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:23:14.388 [2024-11-20 15:16:15.212375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.648 [2024-11-20 15:16:15.317294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:14.648 [2024-11-20 15:16:15.317683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:14.648 [2024-11-20 15:16:15.317758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.039 ms 00:23:14.648 [2024-11-20 15:16:15.317773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.648 [2024-11-20 15:16:15.335930] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:14.648 [2024-11-20 15:16:15.364601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.648 [2024-11-20 15:16:15.364694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:14.648 [2024-11-20 15:16:15.364715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.655 ms 00:23:14.648 [2024-11-20 15:16:15.364752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.648 [2024-11-20 15:16:15.364931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.648 [2024-11-20 15:16:15.364948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:14.648 [2024-11-20 15:16:15.364961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:14.648 [2024-11-20 15:16:15.364971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.648 [2024-11-20 15:16:15.365048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.648 [2024-11-20 15:16:15.365061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:14.648 [2024-11-20 15:16:15.365073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:14.648 [2024-11-20 15:16:15.365084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.648 [2024-11-20 15:16:15.365133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.648 [2024-11-20 15:16:15.365147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:14.648 [2024-11-20 15:16:15.365158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:14.648 [2024-11-20 15:16:15.365169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.648 [2024-11-20 15:16:15.365217] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:14.649 [2024-11-20 15:16:15.365248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.649 [2024-11-20 15:16:15.365260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:14.649 [2024-11-20 15:16:15.365285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:14.649 [2024-11-20 15:16:15.365295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.649 [2024-11-20 15:16:15.409508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.649 [2024-11-20 15:16:15.409637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:14.649 [2024-11-20 15:16:15.409659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.251 ms 00:23:14.649 [2024-11-20 15:16:15.409672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.649 [2024-11-20 15:16:15.410203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.649 [2024-11-20 15:16:15.410233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:23:14.649 [2024-11-20 15:16:15.410248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:14.649 [2024-11-20 15:16:15.410260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.649 [2024-11-20 15:16:15.411713] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:14.649 [2024-11-20 15:16:15.418323] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 508.317 ms, result 0 00:23:14.649 [2024-11-20 15:16:15.419697] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:14.649 [2024-11-20 15:16:15.440937] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:15.659  [2024-11-20T15:16:17.451Z] Copying: 29/256 [MB] (29 MBps) [2024-11-20T15:16:18.836Z] Copying: 57/256 [MB] (27 MBps) [2024-11-20T15:16:19.775Z] Copying: 85/256 [MB] (27 MBps) [2024-11-20T15:16:20.707Z] Copying: 114/256 [MB] (28 MBps) [2024-11-20T15:16:21.641Z] Copying: 140/256 [MB] (26 MBps) [2024-11-20T15:16:22.577Z] Copying: 166/256 [MB] (26 MBps) [2024-11-20T15:16:23.514Z] Copying: 194/256 [MB] (27 MBps) [2024-11-20T15:16:24.452Z] Copying: 221/256 [MB] (27 MBps) [2024-11-20T15:16:25.020Z] Copying: 248/256 [MB] (26 MBps) [2024-11-20T15:16:25.020Z] Copying: 256/256 [MB] (average 27 MBps)[2024-11-20 15:16:24.729340] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:24.184 [2024-11-20 15:16:24.746514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.184 [2024-11-20 15:16:24.746619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:24.184 [2024-11-20 15:16:24.746641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:24.184 [2024-11-20 15:16:24.746684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.184 [2024-11-20 15:16:24.746740] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:24.184 [2024-11-20 15:16:24.751585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.184 [2024-11-20 15:16:24.751668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:24.184 [2024-11-20 15:16:24.751688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.809 ms 00:23:24.184 [2024-11-20 15:16:24.751700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.184 [2024-11-20 15:16:24.752050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.184 [2024-11-20 15:16:24.752072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:24.184 [2024-11-20 15:16:24.752086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:23:24.184 [2024-11-20 15:16:24.752099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.184 [2024-11-20 15:16:24.755240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.184 [2024-11-20 15:16:24.755298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:24.184 [2024-11-20 15:16:24.755312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.127 ms 00:23:24.184 [2024-11-20 15:16:24.755323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:24.184 [2024-11-20 15:16:24.761369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.184 [2024-11-20 15:16:24.761442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:24.185 [2024-11-20 15:16:24.761457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.003 ms 00:23:24.185 [2024-11-20 15:16:24.761471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.185 [2024-11-20 15:16:24.808370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.185 [2024-11-20 15:16:24.808764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:24.185 [2024-11-20 15:16:24.808797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.830 ms 00:23:24.185 [2024-11-20 15:16:24.808810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.185 [2024-11-20 15:16:24.834427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.185 [2024-11-20 15:16:24.834553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:24.185 [2024-11-20 15:16:24.834584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.456 ms 00:23:24.185 [2024-11-20 15:16:24.834597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.185 [2024-11-20 15:16:24.834884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.185 [2024-11-20 15:16:24.834907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:24.185 [2024-11-20 15:16:24.834927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:23:24.185 [2024-11-20 15:16:24.834939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.185 [2024-11-20 15:16:24.880825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.185 [2024-11-20 15:16:24.880968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:24.185 [2024-11-20 15:16:24.880990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.911 ms 00:23:24.185 [2024-11-20 15:16:24.881001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.185 [2024-11-20 15:16:24.925958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.185 [2024-11-20 15:16:24.926317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:24.185 [2024-11-20 15:16:24.926365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.885 ms 00:23:24.185 [2024-11-20 15:16:24.926378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.185 [2024-11-20 15:16:24.971138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.185 [2024-11-20 15:16:24.971244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:24.185 [2024-11-20 15:16:24.971265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.661 ms 00:23:24.185 [2024-11-20 15:16:24.971276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.185 [2024-11-20 15:16:25.015941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.185 [2024-11-20 15:16:25.016046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:24.185 [2024-11-20 15:16:25.016068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.521 ms 00:23:24.185 [2024-11-20 
15:16:25.016079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.185 [2024-11-20 15:16:25.016261] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:24.185 [2024-11-20 15:16:25.016285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016552] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 
15:16:25.016917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.016994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.017005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.017016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.017026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.017037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.017049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.017060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.017071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.017083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:24.185 [2024-11-20 15:16:25.017094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:23:24.186 [2024-11-20 15:16:25.017194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:24.186 [2024-11-20 15:16:25.017545] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:24.186 [2024-11-20 15:16:25.017556] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9ea78f07-8cfa-4fd3-80ff-f63c8fb0b0f1 00:23:24.186 [2024-11-20 15:16:25.017569] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:24.186 [2024-11-20 15:16:25.017579] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:24.186 [2024-11-20 15:16:25.017600] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:24.186 [2024-11-20 15:16:25.017612] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:24.186 [2024-11-20 15:16:25.017624] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:24.186 [2024-11-20 15:16:25.017635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:24.444 [2024-11-20 15:16:25.017646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:24.444 [2024-11-20 15:16:25.017655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:24.444 [2024-11-20 15:16:25.017665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:24.444 [2024-11-20 15:16:25.017677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.444 [2024-11-20 15:16:25.017701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:24.444 [2024-11-20 15:16:25.017713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.421 ms 00:23:24.444 [2024-11-20 15:16:25.017735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.445 [2024-11-20 15:16:25.040082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.445 [2024-11-20 15:16:25.040179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:24.445 [2024-11-20 15:16:25.040198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.333 ms 00:23:24.445 [2024-11-20 15:16:25.040211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.445 [2024-11-20 15:16:25.041000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.445 [2024-11-20 15:16:25.041024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:24.445 [2024-11-20 15:16:25.041037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:23:24.445 [2024-11-20 15:16:25.041048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.445 [2024-11-20 15:16:25.103150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.445 [2024-11-20 15:16:25.103248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:24.445 [2024-11-20 15:16:25.103268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.445 [2024-11-20 15:16:25.103280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.445 [2024-11-20 15:16:25.103441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.445 [2024-11-20 15:16:25.103454] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:24.445 [2024-11-20 15:16:25.103466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.445 [2024-11-20 15:16:25.103480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.445 [2024-11-20 15:16:25.103566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.445 [2024-11-20 15:16:25.103581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:24.445 [2024-11-20 15:16:25.103592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.445 [2024-11-20 15:16:25.103604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.445 [2024-11-20 15:16:25.103625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.445 [2024-11-20 15:16:25.103643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:24.445 [2024-11-20 15:16:25.103655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.445 [2024-11-20 15:16:25.103666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.445 [2024-11-20 15:16:25.240832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.445 [2024-11-20 15:16:25.241176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:24.445 [2024-11-20 15:16:25.241208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.445 [2024-11-20 15:16:25.241220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.702 [2024-11-20 15:16:25.354908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.702 [2024-11-20 15:16:25.355267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:24.703 [2024-11-20 15:16:25.355298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.703 [2024-11-20 15:16:25.355312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.703 [2024-11-20 15:16:25.355458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.703 [2024-11-20 15:16:25.355472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:24.703 [2024-11-20 15:16:25.355484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.703 [2024-11-20 15:16:25.355497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.703 [2024-11-20 15:16:25.355535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.703 [2024-11-20 15:16:25.355548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:24.703 [2024-11-20 15:16:25.355574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.703 [2024-11-20 15:16:25.355585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.703 [2024-11-20 15:16:25.355757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.703 [2024-11-20 15:16:25.355783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:24.703 [2024-11-20 15:16:25.355802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.703 [2024-11-20 15:16:25.355821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.703 [2024-11-20 15:16:25.355880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:23:24.703 [2024-11-20 15:16:25.355898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:24.703 [2024-11-20 15:16:25.355911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.703 [2024-11-20 15:16:25.355929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.703 [2024-11-20 15:16:25.355992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.703 [2024-11-20 15:16:25.356005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:24.703 [2024-11-20 15:16:25.356016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.703 [2024-11-20 15:16:25.356027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.703 [2024-11-20 15:16:25.356081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.703 [2024-11-20 15:16:25.356099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:24.703 [2024-11-20 15:16:25.356116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.703 [2024-11-20 15:16:25.356127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.703 [2024-11-20 15:16:25.356310] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 610.799 ms, result 0 00:23:25.711 00:23:25.711 00:23:25.985 15:16:26 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:23:25.985 15:16:26 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:26.243 15:16:27 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:26.500 [2024-11-20 15:16:27.125582] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:23:26.500 [2024-11-20 15:16:27.125901] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79018 ] 00:23:26.500 [2024-11-20 15:16:27.329089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.759 [2024-11-20 15:16:27.477213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.325 [2024-11-20 15:16:27.911242] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:27.325 [2024-11-20 15:16:27.911351] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:27.325 [2024-11-20 15:16:28.088343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.325 [2024-11-20 15:16:28.088453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:27.325 [2024-11-20 15:16:28.088484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:27.325 [2024-11-20 15:16:28.088500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.325 [2024-11-20 15:16:28.092367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.325 [2024-11-20 15:16:28.092423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:27.325 [2024-11-20 15:16:28.092439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.836 ms 00:23:27.325 [2024-11-20 15:16:28.092450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.325 [2024-11-20 15:16:28.092608] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:27.325 [2024-11-20 15:16:28.093781] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:27.325 [2024-11-20 15:16:28.093818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.325 [2024-11-20 15:16:28.093831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:27.325 [2024-11-20 15:16:28.093844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:23:27.325 [2024-11-20 15:16:28.093855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.325 [2024-11-20 15:16:28.096379] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:27.325 [2024-11-20 15:16:28.118805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.325 [2024-11-20 15:16:28.118916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:27.325 [2024-11-20 15:16:28.118938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.455 ms 00:23:27.325 [2024-11-20 15:16:28.118949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.325 [2024-11-20 15:16:28.119192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.325 [2024-11-20 15:16:28.119211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:27.325 [2024-11-20 15:16:28.119223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:27.325 [2024-11-20 15:16:28.119234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.326 [2024-11-20 15:16:28.133209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:27.326 [2024-11-20 15:16:28.133539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:27.326 [2024-11-20 15:16:28.133572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.938 ms 00:23:27.326 [2024-11-20 15:16:28.133592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.326 [2024-11-20 15:16:28.133853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.326 [2024-11-20 15:16:28.133871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:27.326 [2024-11-20 15:16:28.133884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:23:27.326 [2024-11-20 15:16:28.133896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.326 [2024-11-20 15:16:28.133936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.326 [2024-11-20 15:16:28.133953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:27.326 [2024-11-20 15:16:28.133965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:27.326 [2024-11-20 15:16:28.133977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.326 [2024-11-20 15:16:28.134009] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:27.326 [2024-11-20 15:16:28.140162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.326 [2024-11-20 15:16:28.140395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:27.326 [2024-11-20 15:16:28.140421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.174 ms 00:23:27.326 [2024-11-20 15:16:28.140433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.326 [2024-11-20 15:16:28.140541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.326 [2024-11-20 15:16:28.140555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:27.326 [2024-11-20 15:16:28.140567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:27.326 [2024-11-20 15:16:28.140578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.326 [2024-11-20 15:16:28.140605] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:27.326 [2024-11-20 15:16:28.140638] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:27.326 [2024-11-20 15:16:28.140679] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:27.326 [2024-11-20 15:16:28.140700] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:27.326 [2024-11-20 15:16:28.140833] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:27.326 [2024-11-20 15:16:28.140850] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:27.326 [2024-11-20 15:16:28.140866] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:27.326 [2024-11-20 15:16:28.140882] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:27.326 [2024-11-20 15:16:28.140900] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:27.326 [2024-11-20 15:16:28.140913] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:27.326 [2024-11-20 15:16:28.140925] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:27.326 [2024-11-20 15:16:28.140937] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:27.326 [2024-11-20 15:16:28.140948] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:27.326 [2024-11-20 15:16:28.140961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.326 [2024-11-20 15:16:28.140973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:27.326 [2024-11-20 15:16:28.140987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:23:27.326 [2024-11-20 15:16:28.140998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.326 [2024-11-20 15:16:28.141082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.326 [2024-11-20 15:16:28.141100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:27.326 [2024-11-20 15:16:28.141112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:27.326 [2024-11-20 15:16:28.141123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.326 [2024-11-20 15:16:28.141226] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:27.326 [2024-11-20 15:16:28.141253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:27.326 [2024-11-20 15:16:28.141264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:27.326 [2024-11-20 15:16:28.141275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:27.326 [2024-11-20 15:16:28.141286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:27.326 [2024-11-20 15:16:28.141296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:27.326 [2024-11-20 15:16:28.141306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:27.326 [2024-11-20 15:16:28.141315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:27.326 [2024-11-20 15:16:28.141325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:27.326 [2024-11-20 15:16:28.141334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:27.326 [2024-11-20 15:16:28.141343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:27.326 [2024-11-20 15:16:28.141352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:27.326 [2024-11-20 15:16:28.141363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:27.326 [2024-11-20 15:16:28.141387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:27.326 [2024-11-20 15:16:28.141398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:27.326 [2024-11-20 15:16:28.141407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:27.326 [2024-11-20 15:16:28.141416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:27.326 [2024-11-20 15:16:28.141426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:27.326 [2024-11-20 15:16:28.141435] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:27.326 [2024-11-20 15:16:28.141445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:27.326 [2024-11-20 15:16:28.141455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:27.326 [2024-11-20 15:16:28.141464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:27.326 [2024-11-20 15:16:28.141473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:27.326 [2024-11-20 15:16:28.141483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:27.326 [2024-11-20 15:16:28.141492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:27.326 [2024-11-20 15:16:28.141501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:27.326 [2024-11-20 15:16:28.141511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:27.326 [2024-11-20 15:16:28.141520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:27.326 [2024-11-20 15:16:28.141530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:27.326 [2024-11-20 15:16:28.141539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:27.326 [2024-11-20 15:16:28.141549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:27.326 [2024-11-20 15:16:28.141563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:27.326 [2024-11-20 15:16:28.141572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:27.326 [2024-11-20 15:16:28.141582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:27.326 [2024-11-20 15:16:28.141601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:27.326 [2024-11-20 15:16:28.141610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:27.326 [2024-11-20 15:16:28.141620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:27.326 [2024-11-20 15:16:28.141630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:27.326 [2024-11-20 15:16:28.141639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:27.326 [2024-11-20 15:16:28.141648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:27.326 [2024-11-20 15:16:28.141658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:27.326 [2024-11-20 15:16:28.141684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:27.326 [2024-11-20 15:16:28.141694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:27.326 [2024-11-20 15:16:28.141705] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:27.326 [2024-11-20 15:16:28.141719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:27.326 [2024-11-20 15:16:28.141730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:27.326 [2024-11-20 15:16:28.141757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:27.326 [2024-11-20 15:16:28.141769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:27.326 [2024-11-20 15:16:28.141779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:27.326 [2024-11-20 15:16:28.141789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:27.326 
[2024-11-20 15:16:28.141800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:27.326 [2024-11-20 15:16:28.141809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:27.326 [2024-11-20 15:16:28.141819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:27.326 [2024-11-20 15:16:28.141831] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:27.326 [2024-11-20 15:16:28.141844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:27.326 [2024-11-20 15:16:28.141857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:27.326 [2024-11-20 15:16:28.141868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:27.326 [2024-11-20 15:16:28.141879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:27.326 [2024-11-20 15:16:28.141891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:27.326 [2024-11-20 15:16:28.141902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:27.327 [2024-11-20 15:16:28.141914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:27.327 [2024-11-20 15:16:28.141925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:27.327 [2024-11-20 15:16:28.141936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:27.327 [2024-11-20 15:16:28.141948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:27.327 [2024-11-20 15:16:28.141959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:27.327 [2024-11-20 15:16:28.141970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:27.327 [2024-11-20 15:16:28.141981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:27.327 [2024-11-20 15:16:28.141991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:27.327 [2024-11-20 15:16:28.142002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:27.327 [2024-11-20 15:16:28.142013] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:27.327 [2024-11-20 15:16:28.142025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:27.327 [2024-11-20 15:16:28.142036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:27.327 [2024-11-20 15:16:28.142047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:27.327 [2024-11-20 15:16:28.142058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:27.327 [2024-11-20 15:16:28.142069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:27.327 [2024-11-20 15:16:28.142080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.327 [2024-11-20 15:16:28.142093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:27.327 [2024-11-20 15:16:28.142111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.913 ms 00:23:27.327 [2024-11-20 15:16:28.142122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.586 [2024-11-20 15:16:28.191812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.586 [2024-11-20 15:16:28.191893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:27.586 [2024-11-20 15:16:28.191912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.696 ms 00:23:27.586 [2024-11-20 15:16:28.191925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.586 [2024-11-20 15:16:28.192177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.586 [2024-11-20 15:16:28.192192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:27.586 [2024-11-20 15:16:28.192205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:27.586 [2024-11-20 15:16:28.192216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.586 [2024-11-20 15:16:28.267117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.586 [2024-11-20 15:16:28.267178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:27.586 [2024-11-20 15:16:28.267202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.992 ms 00:23:27.586 [2024-11-20 15:16:28.267214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.586 [2024-11-20 15:16:28.267375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.586 [2024-11-20 15:16:28.267390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:27.586 [2024-11-20 15:16:28.267402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:27.586 [2024-11-20 15:16:28.267412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.586 [2024-11-20 15:16:28.268210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.586 [2024-11-20 15:16:28.268226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:27.586 [2024-11-20 15:16:28.268239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:23:27.586 [2024-11-20 15:16:28.268256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.586 [2024-11-20 15:16:28.268404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.586 [2024-11-20 15:16:28.268420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:27.586 [2024-11-20 15:16:28.268432] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:23:27.586 [2024-11-20 15:16:28.268443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.586 [2024-11-20 15:16:28.293460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.586 [2024-11-20 15:16:28.293540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:27.586 [2024-11-20 15:16:28.293560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.027 ms 00:23:27.586 [2024-11-20 15:16:28.293571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.586 [2024-11-20 15:16:28.315665] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:27.586 [2024-11-20 15:16:28.315775] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:27.586 [2024-11-20 15:16:28.315797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.586 [2024-11-20 15:16:28.315811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:27.586 [2024-11-20 15:16:28.315827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.008 ms 00:23:27.586 [2024-11-20 15:16:28.315840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.586 [2024-11-20 15:16:28.348745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.586 [2024-11-20 15:16:28.348874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:27.586 [2024-11-20 15:16:28.348896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.778 ms 00:23:27.586 [2024-11-20 15:16:28.348907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.586 [2024-11-20 15:16:28.370814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.586 [2024-11-20 15:16:28.370921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:27.586 [2024-11-20 15:16:28.370941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.732 ms 00:23:27.586 [2024-11-20 15:16:28.370953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.586 [2024-11-20 15:16:28.393162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.586 [2024-11-20 15:16:28.393257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:27.586 [2024-11-20 15:16:28.393277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.043 ms 00:23:27.586 [2024-11-20 15:16:28.393289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.586 [2024-11-20 15:16:28.394266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.586 [2024-11-20 15:16:28.394300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:27.586 [2024-11-20 15:16:28.394315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:23:27.586 [2024-11-20 15:16:28.394327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.845 [2024-11-20 15:16:28.498492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.845 [2024-11-20 15:16:28.498595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:27.845 [2024-11-20 15:16:28.498616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 104.292 ms 00:23:27.845 [2024-11-20 15:16:28.498629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.845 [2024-11-20 15:16:28.516455] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:27.845 [2024-11-20 15:16:28.544164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.845 [2024-11-20 15:16:28.544256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:27.845 [2024-11-20 15:16:28.544276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.349 ms 00:23:27.845 [2024-11-20 15:16:28.544296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.845 [2024-11-20 15:16:28.544472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.845 [2024-11-20 15:16:28.544488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:27.845 [2024-11-20 15:16:28.544501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:27.845 [2024-11-20 15:16:28.544512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.845 [2024-11-20 15:16:28.544584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.845 [2024-11-20 15:16:28.544597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:27.845 [2024-11-20 15:16:28.544609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:27.845 [2024-11-20 15:16:28.544620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.845 [2024-11-20 15:16:28.544671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.845 [2024-11-20 15:16:28.544686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:27.845 [2024-11-20 15:16:28.544697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:27.845 [2024-11-20 15:16:28.544708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.845 [2024-11-20 15:16:28.544776] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:27.845 [2024-11-20 15:16:28.544791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.845 [2024-11-20 15:16:28.544802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:27.845 [2024-11-20 15:16:28.544813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:27.845 [2024-11-20 15:16:28.544824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.845 [2024-11-20 15:16:28.589598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.845 [2024-11-20 15:16:28.589742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:27.845 [2024-11-20 15:16:28.589781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.809 ms 00:23:27.845 [2024-11-20 15:16:28.589795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.845 [2024-11-20 15:16:28.590058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.845 [2024-11-20 15:16:28.590078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:27.845 [2024-11-20 15:16:28.590091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:27.845 [2024-11-20 15:16:28.590102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
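The startup trace above is a chain of FTL management steps, each reported by trace_step as an Action / name / duration / status quartet, and the finish_msg entry just below folds them into the overall 'FTL startup' total. A minimal sketch (Python, stdlib only; not part of the test — the log path is a placeholder, and it assumes one log entry per line) that tallies these per-step durations from a console log in this format:

#!/usr/bin/env python3
# Sketch only: tally the per-step durations that trace_step reports above.
# Assumes one log entry per line; "console.log" is a placeholder path, not a
# file produced by this test.
import re
import sys
from collections import defaultdict

name_re = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.*\S)")
dur_re = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

totals = defaultdict(float)
pending = None  # step name whose duration line has not been seen yet
with open(sys.argv[1] if len(sys.argv) > 1 else "console.log") as log:
    for line in log:
        m = name_re.search(line)
        if m:
            pending = m.group(1)
            continue
        m = dur_re.search(line)
        if m and pending is not None:
            totals[pending] += float(m.group(1))
            pending = None

for step, ms in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{ms:10.3f} ms  {step}")

Run over this console output, the startup steps should sum to roughly the 503.697 ms that the 'FTL startup' finish_msg below reports.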
00:23:27.845 [2024-11-20 15:16:28.591574] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:27.845 [2024-11-20 15:16:28.597777] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 503.697 ms, result 0 00:23:27.845 [2024-11-20 15:16:28.598911] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:27.845 [2024-11-20 15:16:28.619414] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:28.104  [2024-11-20T15:16:28.940Z] Copying: 4096/4096 [kB] (average 26 MBps)
[2024-11-20 15:16:28.778927] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:28.104 [2024-11-20 15:16:28.795593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.104 [2024-11-20 15:16:28.795663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:28.104 [2024-11-20 15:16:28.795682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:28.104 [2024-11-20 15:16:28.795705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.105 [2024-11-20 15:16:28.795742] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:28.105 [2024-11-20 15:16:28.800530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.105 [2024-11-20 15:16:28.800564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:28.105 [2024-11-20 15:16:28.800579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.775 ms 00:23:28.105 [2024-11-20 15:16:28.800590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.105 [2024-11-20 15:16:28.802864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.105 [2024-11-20 15:16:28.802907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:28.105 [2024-11-20 15:16:28.802920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.211 ms 00:23:28.105 [2024-11-20 15:16:28.802932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.105 [2024-11-20 15:16:28.806225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.105 [2024-11-20 15:16:28.806266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:28.105 [2024-11-20 15:16:28.806279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.277 ms 00:23:28.105 [2024-11-20 15:16:28.806292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.105 [2024-11-20 15:16:28.811921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.105 [2024-11-20 15:16:28.811961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:28.105 [2024-11-20 15:16:28.811975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.603 ms 00:23:28.105 [2024-11-20 15:16:28.811986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.105 [2024-11-20 15:16:28.854434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.105 [2024-11-20 15:16:28.854528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:28.105 [2024-11-20 15:16:28.854548] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 42.445 ms 00:23:28.105 [2024-11-20 15:16:28.854559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.105 [2024-11-20 15:16:28.878741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.105 [2024-11-20 15:16:28.878842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:28.105 [2024-11-20 15:16:28.878869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.089 ms 00:23:28.105 [2024-11-20 15:16:28.878881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.105 [2024-11-20 15:16:28.879099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.105 [2024-11-20 15:16:28.879114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:28.105 [2024-11-20 15:16:28.879127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:28.105 [2024-11-20 15:16:28.879137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.105 [2024-11-20 15:16:28.923326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.105 [2024-11-20 15:16:28.923421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:28.105 [2024-11-20 15:16:28.923443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.218 ms 00:23:28.105 [2024-11-20 15:16:28.923455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.365 [2024-11-20 15:16:28.967325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.365 [2024-11-20 15:16:28.967418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:28.365 [2024-11-20 15:16:28.967438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.787 ms 00:23:28.365 [2024-11-20 15:16:28.967450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.365 [2024-11-20 15:16:29.009976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.365 [2024-11-20 15:16:29.010069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:28.365 [2024-11-20 15:16:29.010089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.467 ms 00:23:28.365 [2024-11-20 15:16:29.010101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.365 [2024-11-20 15:16:29.053228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.365 [2024-11-20 15:16:29.053325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:28.365 [2024-11-20 15:16:29.053344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.908 ms 00:23:28.365 [2024-11-20 15:16:29.053355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.365 [2024-11-20 15:16:29.053522] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:28.365 [2024-11-20 15:16:29.053546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:23:28.365 [2024-11-20 15:16:29.053608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.053991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:28.365 [2024-11-20 15:16:29.054368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054508] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:28.366 [2024-11-20 15:16:29.054803] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:28.366 [2024-11-20 15:16:29.054815] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9ea78f07-8cfa-4fd3-80ff-f63c8fb0b0f1 00:23:28.366 [2024-11-20 15:16:29.054828] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:28.366 [2024-11-20 15:16:29.054839] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:23:28.366 [2024-11-20 15:16:29.054850] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:28.366 [2024-11-20 15:16:29.054862] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:28.366 [2024-11-20 15:16:29.054873] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:28.366 [2024-11-20 15:16:29.054885] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:28.366 [2024-11-20 15:16:29.054896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:28.366 [2024-11-20 15:16:29.054907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:28.366 [2024-11-20 15:16:29.054917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:28.366 [2024-11-20 15:16:29.054940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.366 [2024-11-20 15:16:29.054959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:28.366 [2024-11-20 15:16:29.054971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.424 ms 00:23:28.366 [2024-11-20 15:16:29.054982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.366 [2024-11-20 15:16:29.077539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.366 [2024-11-20 15:16:29.077639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:28.366 [2024-11-20 15:16:29.077660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.557 ms 00:23:28.366 [2024-11-20 15:16:29.077673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.366 [2024-11-20 15:16:29.078498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.366 [2024-11-20 15:16:29.078519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:28.366 [2024-11-20 15:16:29.078532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:23:28.366 [2024-11-20 15:16:29.078544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.366 [2024-11-20 15:16:29.138491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.366 [2024-11-20 15:16:29.138583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:28.366 [2024-11-20 15:16:29.138602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.366 [2024-11-20 15:16:29.138614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.366 [2024-11-20 15:16:29.138812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.366 [2024-11-20 15:16:29.138827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:28.366 [2024-11-20 15:16:29.138839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.366 [2024-11-20 15:16:29.138850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.366 [2024-11-20 15:16:29.138920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.366 [2024-11-20 15:16:29.138934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:28.366 [2024-11-20 15:16:29.138946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.366 [2024-11-20 15:16:29.138957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.366 [2024-11-20 15:16:29.138980] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.366 [2024-11-20 15:16:29.138997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:28.366 [2024-11-20 15:16:29.139008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.366 [2024-11-20 15:16:29.139018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.625 [2024-11-20 15:16:29.274613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.625 [2024-11-20 15:16:29.274688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:28.625 [2024-11-20 15:16:29.274706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.625 [2024-11-20 15:16:29.274725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.625 [2024-11-20 15:16:29.387859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.625 [2024-11-20 15:16:29.387933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:28.625 [2024-11-20 15:16:29.387951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.625 [2024-11-20 15:16:29.387962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.625 [2024-11-20 15:16:29.388099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.625 [2024-11-20 15:16:29.388113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:28.625 [2024-11-20 15:16:29.388124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.625 [2024-11-20 15:16:29.388136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.625 [2024-11-20 15:16:29.388171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.625 [2024-11-20 15:16:29.388183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:28.625 [2024-11-20 15:16:29.388199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.625 [2024-11-20 15:16:29.388210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.625 [2024-11-20 15:16:29.388354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.625 [2024-11-20 15:16:29.388372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:28.625 [2024-11-20 15:16:29.388384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.625 [2024-11-20 15:16:29.388394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.625 [2024-11-20 15:16:29.388436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.625 [2024-11-20 15:16:29.388449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:28.625 [2024-11-20 15:16:29.388465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.625 [2024-11-20 15:16:29.388476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.625 [2024-11-20 15:16:29.388524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.625 [2024-11-20 15:16:29.388536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:28.625 [2024-11-20 15:16:29.388547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.625 [2024-11-20 15:16:29.388558] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:28.625 [2024-11-20 15:16:29.388610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.625 [2024-11-20 15:16:29.388622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:28.625 [2024-11-20 15:16:29.388638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.625 [2024-11-20 15:16:29.388649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.625 [2024-11-20 15:16:29.388845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 594.217 ms, result 0 00:23:30.001 00:23:30.001 00:23:30.001 15:16:30 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79059 00:23:30.001 15:16:30 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:30.001 15:16:30 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79059 00:23:30.001 15:16:30 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79059 ']' 00:23:30.001 15:16:30 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.001 15:16:30 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.001 15:16:30 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.001 15:16:30 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.001 15:16:30 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:30.001 [2024-11-20 15:16:30.682091] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
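At this point trim.sh has relaunched spdk_tgt and waitforlisten polls until the target answers on the default UNIX-domain JSON-RPC socket shown above, /var/tmp/spdk.sock; the rpc.py calls that follow (e.g. load_config) speak JSON-RPC 2.0 over that same socket. A minimal sketch of such a client (Python, stdlib only; rpc_get_methods is a standard SPDK RPC, everything else here is illustrative, with no retries or error handling):

#!/usr/bin/env python3
# Sketch only: a bare JSON-RPC 2.0 client for the spdk_tgt instance started
# above. /var/tmp/spdk.sock is the default socket path shown in the log;
# "rpc_get_methods" is a standard SPDK RPC.
import json
import socket

def spdk_rpc(method, params=None, sock_path="/var/tmp/spdk.sock"):
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(sock_path)
    request = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params is not None:
        request["params"] = params
    sock.sendall(json.dumps(request).encode("utf-8"))
    # Read until the reply parses as one complete JSON object.
    buf = b""
    response = None
    while response is None:
        chunk = sock.recv(4096)
        if not chunk:
            break  # target closed the connection
        buf += chunk
        try:
            response = json.loads(buf)
        except json.JSONDecodeError:
            continue
    sock.close()
    return response

if __name__ == "__main__":
    reply = spdk_rpc("rpc_get_methods")
    print(len(reply["result"]), "RPC methods available")

scripts/rpc.py is essentially a wrapper around this kind of exchange, adding argument parsing and one subcommand per RPC method.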
00:23:30.001 [2024-11-20 15:16:30.682256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79059 ] 00:23:30.259 [2024-11-20 15:16:30.869907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.259 [2024-11-20 15:16:31.012291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.633 15:16:32 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.633 15:16:32 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:31.633 15:16:32 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:31.633 [2024-11-20 15:16:32.284010] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:31.633 [2024-11-20 15:16:32.284095] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:31.893 [2024-11-20 15:16:32.477949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.893 [2024-11-20 15:16:32.478024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:31.893 [2024-11-20 15:16:32.478047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:31.893 [2024-11-20 15:16:32.478061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.893 [2024-11-20 15:16:32.482257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.893 [2024-11-20 15:16:32.482302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:31.893 [2024-11-20 15:16:32.482318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.179 ms 00:23:31.893 [2024-11-20 15:16:32.482330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.893 [2024-11-20 15:16:32.482446] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:31.893 [2024-11-20 15:16:32.483444] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:31.893 [2024-11-20 15:16:32.483481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.893 [2024-11-20 15:16:32.483492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:31.893 [2024-11-20 15:16:32.483506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.050 ms 00:23:31.893 [2024-11-20 15:16:32.483517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.893 [2024-11-20 15:16:32.485996] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:31.893 [2024-11-20 15:16:32.506486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.893 [2024-11-20 15:16:32.506544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:31.893 [2024-11-20 15:16:32.506578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.529 ms 00:23:31.893 [2024-11-20 15:16:32.506593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.893 [2024-11-20 15:16:32.506758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.893 [2024-11-20 15:16:32.506777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:31.893 [2024-11-20 15:16:32.506791] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:31.893 [2024-11-20 15:16:32.506804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.893 [2024-11-20 15:16:32.518987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.893 [2024-11-20 15:16:32.519045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:31.893 [2024-11-20 15:16:32.519060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.138 ms 00:23:31.893 [2024-11-20 15:16:32.519075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.893 [2024-11-20 15:16:32.519247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.893 [2024-11-20 15:16:32.519267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:31.893 [2024-11-20 15:16:32.519278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:31.893 [2024-11-20 15:16:32.519297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.893 [2024-11-20 15:16:32.519331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.893 [2024-11-20 15:16:32.519346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:31.893 [2024-11-20 15:16:32.519357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:31.893 [2024-11-20 15:16:32.519370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.893 [2024-11-20 15:16:32.519402] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:31.893 [2024-11-20 15:16:32.525044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.893 [2024-11-20 15:16:32.525079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:31.893 [2024-11-20 15:16:32.525095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.657 ms 00:23:31.893 [2024-11-20 15:16:32.525106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.893 [2024-11-20 15:16:32.525173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.893 [2024-11-20 15:16:32.525185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:31.893 [2024-11-20 15:16:32.525200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:31.893 [2024-11-20 15:16:32.525214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.893 [2024-11-20 15:16:32.525243] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:31.893 [2024-11-20 15:16:32.525269] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:31.893 [2024-11-20 15:16:32.525321] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:31.893 [2024-11-20 15:16:32.525343] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:31.893 [2024-11-20 15:16:32.525441] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:31.893 [2024-11-20 15:16:32.525454] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:31.893 [2024-11-20 15:16:32.525478] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:31.893 [2024-11-20 15:16:32.525492] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:31.893 [2024-11-20 15:16:32.525507] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:31.893 [2024-11-20 15:16:32.525519] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:31.893 [2024-11-20 15:16:32.525533] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:31.893 [2024-11-20 15:16:32.525543] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:31.893 [2024-11-20 15:16:32.525559] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:31.893 [2024-11-20 15:16:32.525570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.893 [2024-11-20 15:16:32.525583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:31.893 [2024-11-20 15:16:32.525605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:23:31.893 [2024-11-20 15:16:32.525618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.893 [2024-11-20 15:16:32.525699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.893 [2024-11-20 15:16:32.525715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:31.893 [2024-11-20 15:16:32.525758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:31.893 [2024-11-20 15:16:32.525772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.893 [2024-11-20 15:16:32.525867] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:31.893 [2024-11-20 15:16:32.525884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:31.893 [2024-11-20 15:16:32.525895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:31.893 [2024-11-20 15:16:32.525909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.893 [2024-11-20 15:16:32.525919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:31.893 [2024-11-20 15:16:32.525932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:31.893 [2024-11-20 15:16:32.525941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:31.893 [2024-11-20 15:16:32.525957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:31.893 [2024-11-20 15:16:32.525967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:31.894 [2024-11-20 15:16:32.525981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:31.894 [2024-11-20 15:16:32.525990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:31.894 [2024-11-20 15:16:32.526003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:31.894 [2024-11-20 15:16:32.526012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:31.894 [2024-11-20 15:16:32.526024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:31.894 [2024-11-20 15:16:32.526034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:31.894 [2024-11-20 15:16:32.526046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.894 
[2024-11-20 15:16:32.526055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:31.894 [2024-11-20 15:16:32.526068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:31.894 [2024-11-20 15:16:32.526077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.894 [2024-11-20 15:16:32.526089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:31.894 [2024-11-20 15:16:32.526110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:31.894 [2024-11-20 15:16:32.526123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:31.894 [2024-11-20 15:16:32.526132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:31.894 [2024-11-20 15:16:32.526148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:31.894 [2024-11-20 15:16:32.526157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:31.894 [2024-11-20 15:16:32.526168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:31.894 [2024-11-20 15:16:32.526178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:31.894 [2024-11-20 15:16:32.526190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:31.894 [2024-11-20 15:16:32.526198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:31.894 [2024-11-20 15:16:32.526210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:31.894 [2024-11-20 15:16:32.526219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:31.894 [2024-11-20 15:16:32.526231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:31.894 [2024-11-20 15:16:32.526240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:31.894 [2024-11-20 15:16:32.526252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:31.894 [2024-11-20 15:16:32.526261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:31.894 [2024-11-20 15:16:32.526275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:31.894 [2024-11-20 15:16:32.526284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:31.894 [2024-11-20 15:16:32.526296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:31.894 [2024-11-20 15:16:32.526305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:31.894 [2024-11-20 15:16:32.526320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.894 [2024-11-20 15:16:32.526329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:31.894 [2024-11-20 15:16:32.526341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:31.894 [2024-11-20 15:16:32.526350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.894 [2024-11-20 15:16:32.526362] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:31.894 [2024-11-20 15:16:32.526375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:31.894 [2024-11-20 15:16:32.526388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:31.894 [2024-11-20 15:16:32.526398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.894 [2024-11-20 15:16:32.526411] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:31.894 [2024-11-20 15:16:32.526420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:31.894 [2024-11-20 15:16:32.526433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:31.894 [2024-11-20 15:16:32.526443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:31.894 [2024-11-20 15:16:32.526455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:31.894 [2024-11-20 15:16:32.526464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:31.894 [2024-11-20 15:16:32.526478] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:31.894 [2024-11-20 15:16:32.526492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:31.894 [2024-11-20 15:16:32.526510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:31.894 [2024-11-20 15:16:32.526521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:31.894 [2024-11-20 15:16:32.526534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:31.894 [2024-11-20 15:16:32.526544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:31.894 [2024-11-20 15:16:32.526559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:31.894 [2024-11-20 15:16:32.526571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:31.894 [2024-11-20 15:16:32.526584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:31.894 [2024-11-20 15:16:32.526595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:31.894 [2024-11-20 15:16:32.526608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:31.894 [2024-11-20 15:16:32.526619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:31.894 [2024-11-20 15:16:32.526632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:31.894 [2024-11-20 15:16:32.526642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:31.894 [2024-11-20 15:16:32.526656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:31.894 [2024-11-20 15:16:32.526666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:31.894 [2024-11-20 15:16:32.526679] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:31.894 [2024-11-20 
15:16:32.526691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:31.894 [2024-11-20 15:16:32.526708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:31.894 [2024-11-20 15:16:32.526728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:31.894 [2024-11-20 15:16:32.526742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:31.894 [2024-11-20 15:16:32.526753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:31.894 [2024-11-20 15:16:32.526766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.894 [2024-11-20 15:16:32.526777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:31.894 [2024-11-20 15:16:32.526790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:23:31.894 [2024-11-20 15:16:32.526804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.894 [2024-11-20 15:16:32.577682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.894 [2024-11-20 15:16:32.577765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:31.894 [2024-11-20 15:16:32.577789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.884 ms 00:23:31.894 [2024-11-20 15:16:32.577808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.894 [2024-11-20 15:16:32.578048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.894 [2024-11-20 15:16:32.578063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:31.895 [2024-11-20 15:16:32.578081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:23:31.895 [2024-11-20 15:16:32.578091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.895 [2024-11-20 15:16:32.637273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.895 [2024-11-20 15:16:32.637351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:31.895 [2024-11-20 15:16:32.637374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.236 ms 00:23:31.895 [2024-11-20 15:16:32.637385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.895 [2024-11-20 15:16:32.637537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.895 [2024-11-20 15:16:32.637551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:31.895 [2024-11-20 15:16:32.637568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:31.895 [2024-11-20 15:16:32.637579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.895 [2024-11-20 15:16:32.638336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.895 [2024-11-20 15:16:32.638362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:31.895 [2024-11-20 15:16:32.638387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:23:31.895 [2024-11-20 15:16:32.638398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:31.895 [2024-11-20 15:16:32.638551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.895 [2024-11-20 15:16:32.638565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:31.895 [2024-11-20 15:16:32.638582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:23:31.895 [2024-11-20 15:16:32.638598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.895 [2024-11-20 15:16:32.666767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.895 [2024-11-20 15:16:32.666827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:31.895 [2024-11-20 15:16:32.666851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.177 ms 00:23:31.895 [2024-11-20 15:16:32.666862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.895 [2024-11-20 15:16:32.698587] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:31.895 [2024-11-20 15:16:32.698655] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:31.895 [2024-11-20 15:16:32.698681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.895 [2024-11-20 15:16:32.698694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:31.895 [2024-11-20 15:16:32.698715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.680 ms 00:23:31.895 [2024-11-20 15:16:32.698734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.153 [2024-11-20 15:16:32.730423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.153 [2024-11-20 15:16:32.730499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:32.153 [2024-11-20 15:16:32.730525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.603 ms 00:23:32.153 [2024-11-20 15:16:32.730537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.153 [2024-11-20 15:16:32.749418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.153 [2024-11-20 15:16:32.749482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:32.153 [2024-11-20 15:16:32.749509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.781 ms 00:23:32.153 [2024-11-20 15:16:32.749520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.153 [2024-11-20 15:16:32.767544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.153 [2024-11-20 15:16:32.767594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:32.153 [2024-11-20 15:16:32.767615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.942 ms 00:23:32.153 [2024-11-20 15:16:32.767626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.153 [2024-11-20 15:16:32.768500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.153 [2024-11-20 15:16:32.768535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:32.153 [2024-11-20 15:16:32.768554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:23:32.153 [2024-11-20 15:16:32.768565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.153 [2024-11-20 
15:16:32.867322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.153 [2024-11-20 15:16:32.867400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:32.153 [2024-11-20 15:16:32.867426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.873 ms 00:23:32.153 [2024-11-20 15:16:32.867438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.153 [2024-11-20 15:16:32.881398] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:32.153 [2024-11-20 15:16:32.907309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.153 [2024-11-20 15:16:32.907396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:32.153 [2024-11-20 15:16:32.907423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.756 ms 00:23:32.153 [2024-11-20 15:16:32.907439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.153 [2024-11-20 15:16:32.907623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.153 [2024-11-20 15:16:32.907649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:32.153 [2024-11-20 15:16:32.907662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:32.153 [2024-11-20 15:16:32.907679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.153 [2024-11-20 15:16:32.907767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.153 [2024-11-20 15:16:32.907791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:32.153 [2024-11-20 15:16:32.907803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:32.153 [2024-11-20 15:16:32.907829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.153 [2024-11-20 15:16:32.907858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.153 [2024-11-20 15:16:32.907876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:32.153 [2024-11-20 15:16:32.907887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:32.153 [2024-11-20 15:16:32.907903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.154 [2024-11-20 15:16:32.907956] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:32.154 [2024-11-20 15:16:32.907984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.154 [2024-11-20 15:16:32.907995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:32.154 [2024-11-20 15:16:32.908020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:32.154 [2024-11-20 15:16:32.908031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.154 [2024-11-20 15:16:32.948499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.154 [2024-11-20 15:16:32.948567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:32.154 [2024-11-20 15:16:32.948593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.487 ms 00:23:32.154 [2024-11-20 15:16:32.948605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.154 [2024-11-20 15:16:32.948814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.154 [2024-11-20 15:16:32.948832] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:32.154 [2024-11-20 15:16:32.948851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:23:32.154 [2024-11-20 15:16:32.948869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.154 [2024-11-20 15:16:32.950327] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:32.154 [2024-11-20 15:16:32.955268] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 472.698 ms, result 0 00:23:32.154 [2024-11-20 15:16:32.956585] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:32.154 Some configs were skipped because the RPC state that can call them passed over. 00:23:32.413 15:16:33 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:23:32.413 [2024-11-20 15:16:33.212610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.413 [2024-11-20 15:16:33.212729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:32.413 [2024-11-20 15:16:33.212750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.554 ms 00:23:32.413 [2024-11-20 15:16:33.212769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.413 [2024-11-20 15:16:33.212819] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.766 ms, result 0 00:23:32.413 true 00:23:32.413 15:16:33 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:23:32.672 [2024-11-20 15:16:33.440348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.672 [2024-11-20 15:16:33.440414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:32.672 [2024-11-20 15:16:33.440439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.322 ms 00:23:32.672 [2024-11-20 15:16:33.440452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.672 [2024-11-20 15:16:33.440530] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.512 ms, result 0 00:23:32.672 true 00:23:32.672 15:16:33 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79059 00:23:32.672 15:16:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79059 ']' 00:23:32.672 15:16:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79059 00:23:32.672 15:16:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:32.672 15:16:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.672 15:16:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79059 00:23:32.948 killing process with pid 79059 00:23:32.948 15:16:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.948 15:16:33 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.948 15:16:33 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79059' 00:23:32.948 15:16:33 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79059 00:23:32.948 15:16:33 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79059 00:23:33.916 [2024-11-20 15:16:34.746284] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.916 [2024-11-20 15:16:34.746389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:33.916 [2024-11-20 15:16:34.746409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:33.916 [2024-11-20 15:16:34.746425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.916 [2024-11-20 15:16:34.746455] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:34.176 [2024-11-20 15:16:34.750943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.176 [2024-11-20 15:16:34.750995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:34.176 [2024-11-20 15:16:34.751026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.470 ms 00:23:34.176 [2024-11-20 15:16:34.751038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.176 [2024-11-20 15:16:34.751345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.176 [2024-11-20 15:16:34.751371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:34.176 [2024-11-20 15:16:34.751386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:23:34.176 [2024-11-20 15:16:34.751398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.176 [2024-11-20 15:16:34.754802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.176 [2024-11-20 15:16:34.754842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:34.176 [2024-11-20 15:16:34.754860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.384 ms 00:23:34.176 [2024-11-20 15:16:34.754870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.176 [2024-11-20 15:16:34.760524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.176 [2024-11-20 15:16:34.760576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:34.176 [2024-11-20 15:16:34.760594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.621 ms 00:23:34.176 [2024-11-20 15:16:34.760604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.176 [2024-11-20 15:16:34.776073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.176 [2024-11-20 15:16:34.776116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:34.176 [2024-11-20 15:16:34.776140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.417 ms 00:23:34.176 [2024-11-20 15:16:34.776167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.176 [2024-11-20 15:16:34.786635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.176 [2024-11-20 15:16:34.786682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:34.176 [2024-11-20 15:16:34.786700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.413 ms 00:23:34.176 [2024-11-20 15:16:34.786711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.176 [2024-11-20 15:16:34.786865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.176 [2024-11-20 15:16:34.786879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:34.176 [2024-11-20 15:16:34.786894] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:23:34.176 [2024-11-20 15:16:34.786904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.176 [2024-11-20 15:16:34.803449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.176 [2024-11-20 15:16:34.803543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:34.176 [2024-11-20 15:16:34.803567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.526 ms 00:23:34.176 [2024-11-20 15:16:34.803578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.176 [2024-11-20 15:16:34.820044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.176 [2024-11-20 15:16:34.820141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:34.176 [2024-11-20 15:16:34.820169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.400 ms 00:23:34.176 [2024-11-20 15:16:34.820181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.176 [2024-11-20 15:16:34.836480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.176 [2024-11-20 15:16:34.836568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:34.176 [2024-11-20 15:16:34.836594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.216 ms 00:23:34.176 [2024-11-20 15:16:34.836605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.176 [2024-11-20 15:16:34.853129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.176 [2024-11-20 15:16:34.853219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:34.176 [2024-11-20 15:16:34.853244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.407 ms 00:23:34.176 [2024-11-20 15:16:34.853255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.176 [2024-11-20 15:16:34.853339] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:34.176 [2024-11-20 15:16:34.853365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 
15:16:34.853518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:23:34.176 [2024-11-20 15:16:34.853913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.853987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.854004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.854015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:34.176 [2024-11-20 15:16:34.854034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:34.177 [2024-11-20 15:16:34.854877] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:34.177 [2024-11-20 15:16:34.854906] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9ea78f07-8cfa-4fd3-80ff-f63c8fb0b0f1 00:23:34.177 [2024-11-20 15:16:34.854939] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:34.177 [2024-11-20 15:16:34.854964] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:34.177 [2024-11-20 15:16:34.854974] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:34.177 [2024-11-20 15:16:34.854991] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:34.177 [2024-11-20 15:16:34.855002] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:34.177 [2024-11-20 15:16:34.855019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:34.177 [2024-11-20 15:16:34.855030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:34.177 [2024-11-20 15:16:34.855044] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:34.177 [2024-11-20 15:16:34.855054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:34.177 [2024-11-20 15:16:34.855071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:34.177 [2024-11-20 15:16:34.855083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:34.177 [2024-11-20 15:16:34.855100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.739 ms 00:23:34.177 [2024-11-20 15:16:34.855111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.177 [2024-11-20 15:16:34.877383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.177 [2024-11-20 15:16:34.877474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:34.177 [2024-11-20 15:16:34.877507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.238 ms 00:23:34.177 [2024-11-20 15:16:34.877521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.177 [2024-11-20 15:16:34.878263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.177 [2024-11-20 15:16:34.878301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:34.177 [2024-11-20 15:16:34.878322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:23:34.177 [2024-11-20 15:16:34.878346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.177 [2024-11-20 15:16:34.951544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.177 [2024-11-20 15:16:34.951634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:34.177 [2024-11-20 15:16:34.951658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.177 [2024-11-20 15:16:34.951669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.177 [2024-11-20 15:16:34.951903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.177 [2024-11-20 15:16:34.951919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:34.177 [2024-11-20 15:16:34.951936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.177 [2024-11-20 15:16:34.951954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.177 [2024-11-20 15:16:34.952027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.177 [2024-11-20 15:16:34.952040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:34.177 [2024-11-20 15:16:34.952064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.177 [2024-11-20 15:16:34.952074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.177 [2024-11-20 15:16:34.952103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.177 [2024-11-20 15:16:34.952114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:34.177 [2024-11-20 15:16:34.952131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.177 [2024-11-20 15:16:34.952142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.437 [2024-11-20 15:16:35.088683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.437 [2024-11-20 15:16:35.088775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:34.437 [2024-11-20 15:16:35.088801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.437 [2024-11-20 15:16:35.088813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.437 [2024-11-20 
15:16:35.195171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.437 [2024-11-20 15:16:35.195261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:34.437 [2024-11-20 15:16:35.195285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.437 [2024-11-20 15:16:35.195304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.437 [2024-11-20 15:16:35.195467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.437 [2024-11-20 15:16:35.195481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:34.437 [2024-11-20 15:16:35.195505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.437 [2024-11-20 15:16:35.195516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.437 [2024-11-20 15:16:35.195556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.437 [2024-11-20 15:16:35.195568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:34.437 [2024-11-20 15:16:35.195584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.437 [2024-11-20 15:16:35.195595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.437 [2024-11-20 15:16:35.195765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.437 [2024-11-20 15:16:35.195781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:34.437 [2024-11-20 15:16:35.195798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.437 [2024-11-20 15:16:35.195808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.437 [2024-11-20 15:16:35.195861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.437 [2024-11-20 15:16:35.195874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:34.437 [2024-11-20 15:16:35.195891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.437 [2024-11-20 15:16:35.195901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.437 [2024-11-20 15:16:35.195963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.437 [2024-11-20 15:16:35.195992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:34.437 [2024-11-20 15:16:35.196016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.437 [2024-11-20 15:16:35.196026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.437 [2024-11-20 15:16:35.196088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.437 [2024-11-20 15:16:35.196106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:34.437 [2024-11-20 15:16:35.196123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.437 [2024-11-20 15:16:35.196135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.437 [2024-11-20 15:16:35.196322] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 450.725 ms, result 0 00:23:35.813 15:16:36 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:35.813 [2024-11-20 15:16:36.433611] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:23:35.813 [2024-11-20 15:16:36.433774] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79124 ] 00:23:35.813 [2024-11-20 15:16:36.618501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.071 [2024-11-20 15:16:36.759627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.639 [2024-11-20 15:16:37.186770] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:36.639 [2024-11-20 15:16:37.186859] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:36.639 [2024-11-20 15:16:37.353508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.639 [2024-11-20 15:16:37.353585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:36.639 [2024-11-20 15:16:37.353611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:36.639 [2024-11-20 15:16:37.353623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.639 [2024-11-20 15:16:37.357092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.639 [2024-11-20 15:16:37.357134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.639 [2024-11-20 15:16:37.357148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.451 ms 00:23:36.639 [2024-11-20 15:16:37.357159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.639 [2024-11-20 15:16:37.357265] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:36.639 [2024-11-20 15:16:37.358254] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:36.639 [2024-11-20 15:16:37.358289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.639 [2024-11-20 15:16:37.358301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.639 [2024-11-20 15:16:37.358313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.036 ms 00:23:36.639 [2024-11-20 15:16:37.358323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.639 [2024-11-20 15:16:37.360811] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:36.639 [2024-11-20 15:16:37.381531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.639 [2024-11-20 15:16:37.381579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:36.639 [2024-11-20 15:16:37.381601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.754 ms 00:23:36.639 [2024-11-20 15:16:37.381612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.639 [2024-11-20 15:16:37.381735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.639 [2024-11-20 15:16:37.381751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:36.639 [2024-11-20 15:16:37.381763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:36.639 [2024-11-20 
15:16:37.381774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.639 [2024-11-20 15:16:37.393799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.639 [2024-11-20 15:16:37.393850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.639 [2024-11-20 15:16:37.393866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.994 ms 00:23:36.639 [2024-11-20 15:16:37.393876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.639 [2024-11-20 15:16:37.394050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.639 [2024-11-20 15:16:37.394070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.639 [2024-11-20 15:16:37.394092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:23:36.639 [2024-11-20 15:16:37.394103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.639 [2024-11-20 15:16:37.394139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.639 [2024-11-20 15:16:37.394156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:36.639 [2024-11-20 15:16:37.394166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:36.639 [2024-11-20 15:16:37.394178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.639 [2024-11-20 15:16:37.394209] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:36.639 [2024-11-20 15:16:37.399882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.639 [2024-11-20 15:16:37.399915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.639 [2024-11-20 15:16:37.399928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.693 ms 00:23:36.639 [2024-11-20 15:16:37.399939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.639 [2024-11-20 15:16:37.400002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.639 [2024-11-20 15:16:37.400015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:36.639 [2024-11-20 15:16:37.400026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:36.639 [2024-11-20 15:16:37.400036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.639 [2024-11-20 15:16:37.400060] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:36.639 [2024-11-20 15:16:37.400091] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:36.639 [2024-11-20 15:16:37.400133] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:36.639 [2024-11-20 15:16:37.400153] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:36.639 [2024-11-20 15:16:37.400249] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:36.639 [2024-11-20 15:16:37.400263] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:36.639 [2024-11-20 15:16:37.400277] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:23:36.639 [2024-11-20 15:16:37.400291] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:36.639 [2024-11-20 15:16:37.400308] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:36.639 [2024-11-20 15:16:37.400320] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:36.639 [2024-11-20 15:16:37.400331] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:36.639 [2024-11-20 15:16:37.400342] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:36.639 [2024-11-20 15:16:37.400353] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:36.639 [2024-11-20 15:16:37.400364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.639 [2024-11-20 15:16:37.400375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:36.639 [2024-11-20 15:16:37.400385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:23:36.639 [2024-11-20 15:16:37.400395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.639 [2024-11-20 15:16:37.400474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.639 [2024-11-20 15:16:37.400490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:36.639 [2024-11-20 15:16:37.400501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:36.639 [2024-11-20 15:16:37.400511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.639 [2024-11-20 15:16:37.400610] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:36.639 [2024-11-20 15:16:37.400627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:36.639 [2024-11-20 15:16:37.400639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.639 [2024-11-20 15:16:37.400650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.639 [2024-11-20 15:16:37.400661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:36.639 [2024-11-20 15:16:37.400671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:36.639 [2024-11-20 15:16:37.400680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:36.639 [2024-11-20 15:16:37.400690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:36.639 [2024-11-20 15:16:37.400700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:36.639 [2024-11-20 15:16:37.400709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.639 [2024-11-20 15:16:37.400729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:36.639 [2024-11-20 15:16:37.400743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:36.639 [2024-11-20 15:16:37.400752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.639 [2024-11-20 15:16:37.400775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:36.639 [2024-11-20 15:16:37.400785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:36.639 [2024-11-20 15:16:37.400794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.639 [2024-11-20 15:16:37.400803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:23:36.639 [2024-11-20 15:16:37.400813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:36.639 [2024-11-20 15:16:37.400823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.639 [2024-11-20 15:16:37.400832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:36.639 [2024-11-20 15:16:37.400842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:36.639 [2024-11-20 15:16:37.400851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.639 [2024-11-20 15:16:37.400861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:36.639 [2024-11-20 15:16:37.400870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:36.639 [2024-11-20 15:16:37.400879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.639 [2024-11-20 15:16:37.400889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:36.639 [2024-11-20 15:16:37.400898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:36.639 [2024-11-20 15:16:37.400906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.639 [2024-11-20 15:16:37.400916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:36.639 [2024-11-20 15:16:37.400926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:36.639 [2024-11-20 15:16:37.400935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.639 [2024-11-20 15:16:37.400943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:36.639 [2024-11-20 15:16:37.400952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:36.639 [2024-11-20 15:16:37.400961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.639 [2024-11-20 15:16:37.400970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:36.639 [2024-11-20 15:16:37.400979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:36.639 [2024-11-20 15:16:37.400988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.639 [2024-11-20 15:16:37.400997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:36.639 [2024-11-20 15:16:37.401006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:36.639 [2024-11-20 15:16:37.401015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.639 [2024-11-20 15:16:37.401024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:36.639 [2024-11-20 15:16:37.401033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:36.639 [2024-11-20 15:16:37.401042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.639 [2024-11-20 15:16:37.401051] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:36.639 [2024-11-20 15:16:37.401062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:36.639 [2024-11-20 15:16:37.401072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.639 [2024-11-20 15:16:37.401087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.639 [2024-11-20 15:16:37.401098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:36.639 [2024-11-20 15:16:37.401108] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:36.639 [2024-11-20 15:16:37.401118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:36.639 [2024-11-20 15:16:37.401127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:36.639 [2024-11-20 15:16:37.401137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:36.639 [2024-11-20 15:16:37.401146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:36.639 [2024-11-20 15:16:37.401157] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:36.639 [2024-11-20 15:16:37.401170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.639 [2024-11-20 15:16:37.401182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:36.639 [2024-11-20 15:16:37.401192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:36.639 [2024-11-20 15:16:37.401202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:36.639 [2024-11-20 15:16:37.401213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:36.639 [2024-11-20 15:16:37.401223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:36.639 [2024-11-20 15:16:37.401234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:36.639 [2024-11-20 15:16:37.401245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:36.639 [2024-11-20 15:16:37.401255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:36.640 [2024-11-20 15:16:37.401265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:36.640 [2024-11-20 15:16:37.401275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:36.640 [2024-11-20 15:16:37.401286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:36.640 [2024-11-20 15:16:37.401296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:36.640 [2024-11-20 15:16:37.401307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:36.640 [2024-11-20 15:16:37.401317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:36.640 [2024-11-20 15:16:37.401328] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:36.640 [2024-11-20 15:16:37.401340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.640 [2024-11-20 15:16:37.401352] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:36.640 [2024-11-20 15:16:37.401362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:36.640 [2024-11-20 15:16:37.401372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:36.640 [2024-11-20 15:16:37.401382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:36.640 [2024-11-20 15:16:37.401395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.640 [2024-11-20 15:16:37.401406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:36.640 [2024-11-20 15:16:37.401422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:23:36.640 [2024-11-20 15:16:37.401432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.640 [2024-11-20 15:16:37.450829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.640 [2024-11-20 15:16:37.450880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.640 [2024-11-20 15:16:37.450897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.409 ms 00:23:36.640 [2024-11-20 15:16:37.450910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.640 [2024-11-20 15:16:37.451108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.640 [2024-11-20 15:16:37.451123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:36.640 [2024-11-20 15:16:37.451135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:36.640 [2024-11-20 15:16:37.451146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.899 [2024-11-20 15:16:37.517456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.899 [2024-11-20 15:16:37.517524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.899 [2024-11-20 15:16:37.517547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.386 ms 00:23:36.899 [2024-11-20 15:16:37.517559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.899 [2024-11-20 15:16:37.517716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.899 [2024-11-20 15:16:37.517740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.899 [2024-11-20 15:16:37.517752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:36.899 [2024-11-20 15:16:37.517763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.899 [2024-11-20 15:16:37.518466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.899 [2024-11-20 15:16:37.518489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.899 [2024-11-20 15:16:37.518500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:23:36.899 [2024-11-20 15:16:37.518519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.899 [2024-11-20 15:16:37.518666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:36.899 [2024-11-20 15:16:37.518681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.899 [2024-11-20 15:16:37.518692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:23:36.899 [2024-11-20 15:16:37.518702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.899 [2024-11-20 15:16:37.542679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.899 [2024-11-20 15:16:37.542737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.899 [2024-11-20 15:16:37.542753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.972 ms 00:23:36.899 [2024-11-20 15:16:37.542765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.899 [2024-11-20 15:16:37.566697] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:36.899 [2024-11-20 15:16:37.566779] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:36.899 [2024-11-20 15:16:37.566802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.899 [2024-11-20 15:16:37.566814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:36.899 [2024-11-20 15:16:37.566829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.900 ms 00:23:36.899 [2024-11-20 15:16:37.566840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.899 [2024-11-20 15:16:37.598423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.899 [2024-11-20 15:16:37.598491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:36.899 [2024-11-20 15:16:37.598509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.500 ms 00:23:36.899 [2024-11-20 15:16:37.598527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.899 [2024-11-20 15:16:37.617162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.899 [2024-11-20 15:16:37.617204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:36.899 [2024-11-20 15:16:37.617219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.556 ms 00:23:36.899 [2024-11-20 15:16:37.617230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.899 [2024-11-20 15:16:37.635469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.899 [2024-11-20 15:16:37.635509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:36.899 [2024-11-20 15:16:37.635524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.176 ms 00:23:36.899 [2024-11-20 15:16:37.635535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.899 [2024-11-20 15:16:37.636354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.899 [2024-11-20 15:16:37.636390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:36.899 [2024-11-20 15:16:37.636404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:23:36.899 [2024-11-20 15:16:37.636420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.171 [2024-11-20 15:16:37.734980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.171 [2024-11-20 
15:16:37.735063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:37.171 [2024-11-20 15:16:37.735083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.681 ms 00:23:37.171 [2024-11-20 15:16:37.735096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.171 [2024-11-20 15:16:37.747973] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:37.171 [2024-11-20 15:16:37.773987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.171 [2024-11-20 15:16:37.774062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:37.171 [2024-11-20 15:16:37.774082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.777 ms 00:23:37.171 [2024-11-20 15:16:37.774104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.171 [2024-11-20 15:16:37.774301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.171 [2024-11-20 15:16:37.774317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:37.171 [2024-11-20 15:16:37.774329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:37.171 [2024-11-20 15:16:37.774340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.171 [2024-11-20 15:16:37.774446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.171 [2024-11-20 15:16:37.774464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:37.171 [2024-11-20 15:16:37.774478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:37.171 [2024-11-20 15:16:37.774488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.171 [2024-11-20 15:16:37.774552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.171 [2024-11-20 15:16:37.774566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:37.171 [2024-11-20 15:16:37.774577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:37.171 [2024-11-20 15:16:37.774588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.171 [2024-11-20 15:16:37.774654] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:37.171 [2024-11-20 15:16:37.774669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.171 [2024-11-20 15:16:37.774681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:37.171 [2024-11-20 15:16:37.774692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:37.171 [2024-11-20 15:16:37.774702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.171 [2024-11-20 15:16:37.813504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.171 [2024-11-20 15:16:37.813577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:37.171 [2024-11-20 15:16:37.813606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.815 ms 00:23:37.171 [2024-11-20 15:16:37.813618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.171 [2024-11-20 15:16:37.813817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.171 [2024-11-20 15:16:37.813845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:37.171 [2024-11-20 
15:16:37.813858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:37.171 [2024-11-20 15:16:37.813869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.171 [2024-11-20 15:16:37.815354] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:37.171 [2024-11-20 15:16:37.820844] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 462.100 ms, result 0 00:23:37.171 [2024-11-20 15:16:37.821816] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:37.171 [2024-11-20 15:16:37.841095] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:38.132  [2024-11-20T15:16:40.344Z] Copying: 31/256 [MB] (31 MBps) [2024-11-20T15:16:40.913Z] Copying: 59/256 [MB] (28 MBps) [2024-11-20T15:16:42.290Z] Copying: 88/256 [MB] (28 MBps) [2024-11-20T15:16:43.225Z] Copying: 115/256 [MB] (27 MBps) [2024-11-20T15:16:44.160Z] Copying: 142/256 [MB] (27 MBps) [2024-11-20T15:16:45.094Z] Copying: 169/256 [MB] (26 MBps) [2024-11-20T15:16:46.028Z] Copying: 195/256 [MB] (26 MBps) [2024-11-20T15:16:47.006Z] Copying: 221/256 [MB] (26 MBps) [2024-11-20T15:16:47.265Z] Copying: 248/256 [MB] (26 MBps) [2024-11-20T15:16:47.832Z] Copying: 256/256 [MB] (average 27 MBps)[2024-11-20 15:16:47.644631] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:46.996 [2024-11-20 15:16:47.663232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.996 [2024-11-20 15:16:47.663308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:46.996 [2024-11-20 15:16:47.663327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:46.997 [2024-11-20 15:16:47.663352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.997 [2024-11-20 15:16:47.663388] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:46.997 [2024-11-20 15:16:47.668638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.997 [2024-11-20 15:16:47.668681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:46.997 [2024-11-20 15:16:47.668695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.235 ms 00:23:46.997 [2024-11-20 15:16:47.668707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.997 [2024-11-20 15:16:47.669033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.997 [2024-11-20 15:16:47.669055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:46.997 [2024-11-20 15:16:47.669068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:23:46.997 [2024-11-20 15:16:47.669079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.997 [2024-11-20 15:16:47.672136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.997 [2024-11-20 15:16:47.672178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:46.997 [2024-11-20 15:16:47.672192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.042 ms 00:23:46.997 [2024-11-20 15:16:47.672203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.997 [2024-11-20 
15:16:47.678665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.997 [2024-11-20 15:16:47.678727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:46.997 [2024-11-20 15:16:47.678742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.440 ms 00:23:46.997 [2024-11-20 15:16:47.678754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.997 [2024-11-20 15:16:47.723100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.997 [2024-11-20 15:16:47.723185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:46.997 [2024-11-20 15:16:47.723206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.305 ms 00:23:46.997 [2024-11-20 15:16:47.723217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.997 [2024-11-20 15:16:47.748348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.997 [2024-11-20 15:16:47.748449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:46.997 [2024-11-20 15:16:47.748480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.046 ms 00:23:46.997 [2024-11-20 15:16:47.748496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.997 [2024-11-20 15:16:47.748789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.997 [2024-11-20 15:16:47.748810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:46.997 [2024-11-20 15:16:47.748828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:23:46.997 [2024-11-20 15:16:47.748843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.997 [2024-11-20 15:16:47.787087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.997 [2024-11-20 15:16:47.787172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:46.997 [2024-11-20 15:16:47.787196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.260 ms 00:23:46.997 [2024-11-20 15:16:47.787212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.997 [2024-11-20 15:16:47.824042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.997 [2024-11-20 15:16:47.824122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:46.997 [2024-11-20 15:16:47.824145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.788 ms 00:23:46.997 [2024-11-20 15:16:47.824161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.256 [2024-11-20 15:16:47.861280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.256 [2024-11-20 15:16:47.861356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:47.256 [2024-11-20 15:16:47.861376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.086 ms 00:23:47.256 [2024-11-20 15:16:47.861386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.256 [2024-11-20 15:16:47.900439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.256 [2024-11-20 15:16:47.900499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:47.256 [2024-11-20 15:16:47.900518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.967 ms 00:23:47.256 [2024-11-20 15:16:47.900529] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.256 [2024-11-20 15:16:47.900635] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:47.256 [2024-11-20 15:16:47.900659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:47.256 [2024-11-20 15:16:47.900674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:47.256 [2024-11-20 15:16:47.900686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:47.256 [2024-11-20 15:16:47.900699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:47.256 [2024-11-20 15:16:47.900711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 
0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.900990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901518] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:47.257 [2024-11-20 15:16:47.901754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:47.258 [2024-11-20 15:16:47.901766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:47.258 [2024-11-20 15:16:47.901778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:47.258 [2024-11-20 15:16:47.901790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:47.258 [2024-11-20 15:16:47.901817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:47.258 [2024-11-20 15:16:47.901829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:47.258 [2024-11-20 15:16:47.901840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:47.258 [2024-11-20 
15:16:47.901852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:47.258 [2024-11-20 15:16:47.901864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:47.258 [2024-11-20 15:16:47.901883] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:47.258 [2024-11-20 15:16:47.901896] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9ea78f07-8cfa-4fd3-80ff-f63c8fb0b0f1 00:23:47.258 [2024-11-20 15:16:47.901909] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:47.258 [2024-11-20 15:16:47.901921] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:47.258 [2024-11-20 15:16:47.901932] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:47.258 [2024-11-20 15:16:47.901945] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:47.258 [2024-11-20 15:16:47.901956] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:47.258 [2024-11-20 15:16:47.901968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:47.258 [2024-11-20 15:16:47.901979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:47.258 [2024-11-20 15:16:47.901989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:47.258 [2024-11-20 15:16:47.902000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:47.258 [2024-11-20 15:16:47.902012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.258 [2024-11-20 15:16:47.902029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:47.258 [2024-11-20 15:16:47.902042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.381 ms 00:23:47.258 [2024-11-20 15:16:47.902053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.258 [2024-11-20 15:16:47.923050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.258 [2024-11-20 15:16:47.923098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:47.258 [2024-11-20 15:16:47.923114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.991 ms 00:23:47.258 [2024-11-20 15:16:47.923125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.258 [2024-11-20 15:16:47.923773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.258 [2024-11-20 15:16:47.923799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:47.258 [2024-11-20 15:16:47.923811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:23:47.258 [2024-11-20 15:16:47.923822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.258 [2024-11-20 15:16:47.981901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.258 [2024-11-20 15:16:47.981970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:47.258 [2024-11-20 15:16:47.981988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.258 [2024-11-20 15:16:47.982000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.258 [2024-11-20 15:16:47.982184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.258 [2024-11-20 15:16:47.982202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
metadata 00:23:47.258 [2024-11-20 15:16:47.982214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.258 [2024-11-20 15:16:47.982225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.258 [2024-11-20 15:16:47.982290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.258 [2024-11-20 15:16:47.982304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:47.258 [2024-11-20 15:16:47.982315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.258 [2024-11-20 15:16:47.982325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.258 [2024-11-20 15:16:47.982347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.258 [2024-11-20 15:16:47.982364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:47.258 [2024-11-20 15:16:47.982375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.258 [2024-11-20 15:16:47.982386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.516 [2024-11-20 15:16:48.118317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.516 [2024-11-20 15:16:48.118409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:47.516 [2024-11-20 15:16:48.118428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.516 [2024-11-20 15:16:48.118439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.516 [2024-11-20 15:16:48.224868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.516 [2024-11-20 15:16:48.224954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:47.516 [2024-11-20 15:16:48.224972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.516 [2024-11-20 15:16:48.224983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.517 [2024-11-20 15:16:48.225111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.517 [2024-11-20 15:16:48.225123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:47.517 [2024-11-20 15:16:48.225135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.517 [2024-11-20 15:16:48.225147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.517 [2024-11-20 15:16:48.225180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.517 [2024-11-20 15:16:48.225192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:47.517 [2024-11-20 15:16:48.225211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.517 [2024-11-20 15:16:48.225222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.517 [2024-11-20 15:16:48.225370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.517 [2024-11-20 15:16:48.225385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:47.517 [2024-11-20 15:16:48.225398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.517 [2024-11-20 15:16:48.225409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.517 [2024-11-20 15:16:48.225450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.517 [2024-11-20 15:16:48.225463] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:47.517 [2024-11-20 15:16:48.225474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.517 [2024-11-20 15:16:48.225489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.517 [2024-11-20 15:16:48.225563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.517 [2024-11-20 15:16:48.225578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:47.517 [2024-11-20 15:16:48.225597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.517 [2024-11-20 15:16:48.225607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.517 [2024-11-20 15:16:48.225664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.517 [2024-11-20 15:16:48.225680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:47.517 [2024-11-20 15:16:48.225699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.517 [2024-11-20 15:16:48.225710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.517 [2024-11-20 15:16:48.225917] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 563.600 ms, result 0 00:23:48.910 00:23:48.910 00:23:48.910 15:16:49 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:49.168 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:23:49.168 15:16:49 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:23:49.168 15:16:49 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:23:49.168 15:16:49 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:49.168 15:16:49 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:49.168 15:16:49 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:23:49.168 15:16:49 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:49.168 15:16:49 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79059 00:23:49.168 15:16:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79059 ']' 00:23:49.168 Process with pid 79059 is not found 00:23:49.168 15:16:49 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79059 00:23:49.168 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79059) - No such process 00:23:49.168 15:16:49 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 79059 is not found' 00:23:49.168 00:23:49.168 real 1m12.882s 00:23:49.168 user 1m38.607s 00:23:49.168 sys 0m8.756s 00:23:49.168 15:16:49 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:49.168 ************************************ 00:23:49.168 END TEST ftl_trim 00:23:49.168 ************************************ 00:23:49.168 15:16:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:49.426 15:16:50 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:49.426 15:16:50 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:49.426 15:16:50 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:49.426 15:16:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:49.426 ************************************ 
00:23:49.426 START TEST ftl_restore ************************************
15:16:50 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:23:49.426 * Looking for test storage...
00:23:49.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:23:49.426 15:16:50 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:23:49.426 15:16:50 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:23:49.426 15:16:50 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version
00:23:49.686 15:16:50 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-:
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-:
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<'
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:49.686 15:16:50 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0
00:23:49.686 15:16:50 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:49.686 15:16:50 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:23:49.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:49.686 --rc genhtml_branch_coverage=1
00:23:49.686 --rc genhtml_function_coverage=1
00:23:49.686 --rc genhtml_legend=1
00:23:49.686 --rc geninfo_all_blocks=1
00:23:49.686 --rc geninfo_unexecuted_blocks=1
00:23:49.686
00:23:49.686 '
00:23:49.686 15:16:50 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:23:49.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:49.686 --rc genhtml_branch_coverage=1
00:23:49.686 --rc genhtml_function_coverage=1
00:23:49.686 --rc genhtml_legend=1
00:23:49.686 --rc geninfo_all_blocks=1
00:23:49.686 --rc geninfo_unexecuted_blocks=1
00:23:49.686
00:23:49.686 '
00:23:49.686 15:16:50 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:23:49.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:49.686 --rc genhtml_branch_coverage=1
00:23:49.686 --rc genhtml_function_coverage=1
00:23:49.686 --rc genhtml_legend=1
00:23:49.686 --rc geninfo_all_blocks=1
00:23:49.686 --rc geninfo_unexecuted_blocks=1
00:23:49.686
00:23:49.686 '
00:23:49.686 15:16:50 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:23:49.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:49.686 --rc genhtml_branch_coverage=1
00:23:49.686 --rc genhtml_function_coverage=1
00:23:49.686 --rc genhtml_legend=1
00:23:49.686 --rc geninfo_all_blocks=1
00:23:49.686 --rc geninfo_unexecuted_blocks=1
00:23:49.686
00:23:49.686 '
00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
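The xtrace above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.0: `lt 1.15 2` splits both version strings on `.`, `-` and `:` and compares them component by component. A condensed sketch of that comparison, paraphrased from the trace rather than copied from the tree (the real helper also normalizes each component through its decimal() function and tallies lt/gt/eq counters):

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local ver1 ver2 op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      # walk the longer component list; missing fields compare as 0
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>'* ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<'* ]]; return; }
      done
      [[ $op == *'='* ]]   # equal versions satisfy only ==, <= and >=
  }

Here the very first components already differ (1 < 2), so the trace returns 0 at scripts/common.sh@368 and the run selects the lcov 1.x option names (the --rc lcov_branch_coverage/lcov_function_coverage block that follows).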
00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:49.686 15:16:50 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.DNQE5peJRR 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:49.687 
15:16:50 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79337 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79337 00:23:49.687 15:16:50 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.687 15:16:50 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79337 ']' 00:23:49.687 15:16:50 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.687 15:16:50 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.687 15:16:50 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.687 15:16:50 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.687 15:16:50 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:49.687 [2024-11-20 15:16:50.470674] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:23:49.687 [2024-11-20 15:16:50.471048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79337 ] 00:23:49.945 [2024-11-20 15:16:50.660940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.203 [2024-11-20 15:16:50.806144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.145 15:16:51 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.145 15:16:51 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:23:51.145 15:16:51 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:51.145 15:16:51 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:23:51.145 15:16:51 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:51.145 15:16:51 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:23:51.145 15:16:51 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:23:51.145 15:16:51 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:51.405 15:16:52 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:51.405 15:16:52 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:23:51.405 15:16:52 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:51.405 15:16:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:51.405 15:16:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:51.405 15:16:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:51.405 15:16:52 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:51.405 15:16:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:51.664 15:16:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:51.664 { 00:23:51.664 "name": "nvme0n1", 00:23:51.664 "aliases": [ 00:23:51.664 "6d7bc597-fc93-4129-ac86-782d8dfa9994" 00:23:51.664 ], 00:23:51.664 "product_name": "NVMe disk", 00:23:51.664 "block_size": 4096, 00:23:51.664 "num_blocks": 1310720, 00:23:51.664 "uuid": 
"6d7bc597-fc93-4129-ac86-782d8dfa9994", 00:23:51.664 "numa_id": -1, 00:23:51.664 "assigned_rate_limits": { 00:23:51.664 "rw_ios_per_sec": 0, 00:23:51.664 "rw_mbytes_per_sec": 0, 00:23:51.664 "r_mbytes_per_sec": 0, 00:23:51.664 "w_mbytes_per_sec": 0 00:23:51.664 }, 00:23:51.664 "claimed": true, 00:23:51.664 "claim_type": "read_many_write_one", 00:23:51.664 "zoned": false, 00:23:51.664 "supported_io_types": { 00:23:51.664 "read": true, 00:23:51.664 "write": true, 00:23:51.664 "unmap": true, 00:23:51.664 "flush": true, 00:23:51.664 "reset": true, 00:23:51.664 "nvme_admin": true, 00:23:51.664 "nvme_io": true, 00:23:51.664 "nvme_io_md": false, 00:23:51.664 "write_zeroes": true, 00:23:51.664 "zcopy": false, 00:23:51.664 "get_zone_info": false, 00:23:51.664 "zone_management": false, 00:23:51.664 "zone_append": false, 00:23:51.664 "compare": true, 00:23:51.664 "compare_and_write": false, 00:23:51.664 "abort": true, 00:23:51.664 "seek_hole": false, 00:23:51.664 "seek_data": false, 00:23:51.664 "copy": true, 00:23:51.664 "nvme_iov_md": false 00:23:51.664 }, 00:23:51.664 "driver_specific": { 00:23:51.664 "nvme": [ 00:23:51.664 { 00:23:51.664 "pci_address": "0000:00:11.0", 00:23:51.664 "trid": { 00:23:51.664 "trtype": "PCIe", 00:23:51.664 "traddr": "0000:00:11.0" 00:23:51.664 }, 00:23:51.664 "ctrlr_data": { 00:23:51.664 "cntlid": 0, 00:23:51.664 "vendor_id": "0x1b36", 00:23:51.664 "model_number": "QEMU NVMe Ctrl", 00:23:51.664 "serial_number": "12341", 00:23:51.664 "firmware_revision": "8.0.0", 00:23:51.664 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:51.664 "oacs": { 00:23:51.664 "security": 0, 00:23:51.664 "format": 1, 00:23:51.664 "firmware": 0, 00:23:51.664 "ns_manage": 1 00:23:51.664 }, 00:23:51.664 "multi_ctrlr": false, 00:23:51.664 "ana_reporting": false 00:23:51.664 }, 00:23:51.664 "vs": { 00:23:51.664 "nvme_version": "1.4" 00:23:51.664 }, 00:23:51.664 "ns_data": { 00:23:51.664 "id": 1, 00:23:51.664 "can_share": false 00:23:51.664 } 00:23:51.664 } 00:23:51.664 ], 00:23:51.664 "mp_policy": "active_passive" 00:23:51.664 } 00:23:51.664 } 00:23:51.664 ]' 00:23:51.664 15:16:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:51.664 15:16:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:51.664 15:16:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:51.664 15:16:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:51.664 15:16:52 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:51.664 15:16:52 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:23:51.664 15:16:52 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:23:51.664 15:16:52 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:51.664 15:16:52 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:23:51.664 15:16:52 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:51.664 15:16:52 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:51.924 15:16:52 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=15b83965-7e50-4b17-b046-a89e1ba8e36f 00:23:51.924 15:16:52 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:23:51.924 15:16:52 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 15b83965-7e50-4b17-b046-a89e1ba8e36f 00:23:52.183 15:16:52 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:23:52.442 15:16:53 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=ae23ae9c-c3cd-49de-b25c-d81dde85780d 00:23:52.442 15:16:53 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ae23ae9c-c3cd-49de-b25c-d81dde85780d 00:23:52.701 15:16:53 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=0730f254-dc8b-4563-825e-99cb77aa1e08 00:23:52.701 15:16:53 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:23:52.701 15:16:53 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0730f254-dc8b-4563-825e-99cb77aa1e08 00:23:52.701 15:16:53 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:23:52.701 15:16:53 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:52.701 15:16:53 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=0730f254-dc8b-4563-825e-99cb77aa1e08 00:23:52.701 15:16:53 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:23:52.701 15:16:53 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 0730f254-dc8b-4563-825e-99cb77aa1e08 00:23:52.701 15:16:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0730f254-dc8b-4563-825e-99cb77aa1e08 00:23:52.701 15:16:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:52.701 15:16:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:52.701 15:16:53 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:52.701 15:16:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0730f254-dc8b-4563-825e-99cb77aa1e08 00:23:52.961 15:16:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:52.961 { 00:23:52.961 "name": "0730f254-dc8b-4563-825e-99cb77aa1e08", 00:23:52.961 "aliases": [ 00:23:52.961 "lvs/nvme0n1p0" 00:23:52.961 ], 00:23:52.961 "product_name": "Logical Volume", 00:23:52.961 "block_size": 4096, 00:23:52.961 "num_blocks": 26476544, 00:23:52.961 "uuid": "0730f254-dc8b-4563-825e-99cb77aa1e08", 00:23:52.961 "assigned_rate_limits": { 00:23:52.961 "rw_ios_per_sec": 0, 00:23:52.961 "rw_mbytes_per_sec": 0, 00:23:52.961 "r_mbytes_per_sec": 0, 00:23:52.961 "w_mbytes_per_sec": 0 00:23:52.961 }, 00:23:52.961 "claimed": false, 00:23:52.961 "zoned": false, 00:23:52.961 "supported_io_types": { 00:23:52.961 "read": true, 00:23:52.961 "write": true, 00:23:52.961 "unmap": true, 00:23:52.961 "flush": false, 00:23:52.961 "reset": true, 00:23:52.961 "nvme_admin": false, 00:23:52.961 "nvme_io": false, 00:23:52.961 "nvme_io_md": false, 00:23:52.961 "write_zeroes": true, 00:23:52.961 "zcopy": false, 00:23:52.961 "get_zone_info": false, 00:23:52.961 "zone_management": false, 00:23:52.961 "zone_append": false, 00:23:52.961 "compare": false, 00:23:52.961 "compare_and_write": false, 00:23:52.961 "abort": false, 00:23:52.961 "seek_hole": true, 00:23:52.961 "seek_data": true, 00:23:52.961 "copy": false, 00:23:52.961 "nvme_iov_md": false 00:23:52.961 }, 00:23:52.961 "driver_specific": { 00:23:52.961 "lvol": { 00:23:52.961 "lvol_store_uuid": "ae23ae9c-c3cd-49de-b25c-d81dde85780d", 00:23:52.961 "base_bdev": "nvme0n1", 00:23:52.961 "thin_provision": true, 00:23:52.961 "num_allocated_clusters": 0, 00:23:52.961 "snapshot": false, 00:23:52.961 "clone": false, 00:23:52.961 "esnap_clone": false 00:23:52.961 } 00:23:52.961 } 00:23:52.961 } 00:23:52.961 ]' 00:23:52.961 15:16:53 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:52.961 15:16:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:52.961 15:16:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:52.961 15:16:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:52.961 15:16:53 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:52.961 15:16:53 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:52.961 15:16:53 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:23:52.961 15:16:53 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:23:52.961 15:16:53 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:53.220 15:16:53 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:53.220 15:16:54 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:53.220 15:16:54 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 0730f254-dc8b-4563-825e-99cb77aa1e08 00:23:53.220 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0730f254-dc8b-4563-825e-99cb77aa1e08 00:23:53.220 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:53.220 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:53.220 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:53.220 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0730f254-dc8b-4563-825e-99cb77aa1e08 00:23:53.479 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:53.479 { 00:23:53.479 "name": "0730f254-dc8b-4563-825e-99cb77aa1e08", 00:23:53.479 "aliases": [ 00:23:53.479 "lvs/nvme0n1p0" 00:23:53.479 ], 00:23:53.479 "product_name": "Logical Volume", 00:23:53.479 "block_size": 4096, 00:23:53.479 "num_blocks": 26476544, 00:23:53.479 "uuid": "0730f254-dc8b-4563-825e-99cb77aa1e08", 00:23:53.479 "assigned_rate_limits": { 00:23:53.479 "rw_ios_per_sec": 0, 00:23:53.479 "rw_mbytes_per_sec": 0, 00:23:53.479 "r_mbytes_per_sec": 0, 00:23:53.479 "w_mbytes_per_sec": 0 00:23:53.479 }, 00:23:53.479 "claimed": false, 00:23:53.479 "zoned": false, 00:23:53.479 "supported_io_types": { 00:23:53.479 "read": true, 00:23:53.479 "write": true, 00:23:53.479 "unmap": true, 00:23:53.479 "flush": false, 00:23:53.479 "reset": true, 00:23:53.479 "nvme_admin": false, 00:23:53.479 "nvme_io": false, 00:23:53.479 "nvme_io_md": false, 00:23:53.479 "write_zeroes": true, 00:23:53.479 "zcopy": false, 00:23:53.479 "get_zone_info": false, 00:23:53.479 "zone_management": false, 00:23:53.479 "zone_append": false, 00:23:53.479 "compare": false, 00:23:53.479 "compare_and_write": false, 00:23:53.479 "abort": false, 00:23:53.479 "seek_hole": true, 00:23:53.479 "seek_data": true, 00:23:53.479 "copy": false, 00:23:53.479 "nvme_iov_md": false 00:23:53.479 }, 00:23:53.479 "driver_specific": { 00:23:53.479 "lvol": { 00:23:53.479 "lvol_store_uuid": "ae23ae9c-c3cd-49de-b25c-d81dde85780d", 00:23:53.479 "base_bdev": "nvme0n1", 00:23:53.479 "thin_provision": true, 00:23:53.479 "num_allocated_clusters": 0, 00:23:53.479 "snapshot": false, 00:23:53.479 "clone": false, 00:23:53.479 "esnap_clone": false 00:23:53.479 } 00:23:53.479 } 00:23:53.479 } 00:23:53.479 ]' 00:23:53.479 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
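
The get_bdev_size helper, entered above for the freshly created logical volume, reduces to block_size times num_blocks converted to MiB. A compact sketch of that arithmetic, mirroring the traced bdev_info/bs/nb variables and reusing the rpc_py path set by ftl/common.sh (assumes a running SPDK target):

    # Report a bdev's capacity in MiB from its bdev_get_bdevs dump.
    bdev_size_mb() {
        local bdev=$1 bdev_info bs nb
        bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev")   # one RPC, reused
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $(( bs * nb / 1024 / 1024 ))                  # bytes -> MiB
    }

For the numbers above: 4096 * 26476544 / 1048576 = 103424 MiB for the lvol, and earlier 4096 * 1310720 / 1048576 = 5120 MiB for the raw nvme0n1 namespace.
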
00:23:53.479 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:53.479 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:53.479 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:53.479 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:53.479 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:53.479 15:16:54 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:23:53.479 15:16:54 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:53.738 15:16:54 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:23:53.738 15:16:54 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 0730f254-dc8b-4563-825e-99cb77aa1e08 00:23:53.738 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0730f254-dc8b-4563-825e-99cb77aa1e08 00:23:53.738 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:53.738 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:53.738 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:53.738 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0730f254-dc8b-4563-825e-99cb77aa1e08 00:23:53.998 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:53.998 { 00:23:53.998 "name": "0730f254-dc8b-4563-825e-99cb77aa1e08", 00:23:53.998 "aliases": [ 00:23:53.998 "lvs/nvme0n1p0" 00:23:53.998 ], 00:23:53.998 "product_name": "Logical Volume", 00:23:53.998 "block_size": 4096, 00:23:53.998 "num_blocks": 26476544, 00:23:53.998 "uuid": "0730f254-dc8b-4563-825e-99cb77aa1e08", 00:23:53.998 "assigned_rate_limits": { 00:23:53.998 "rw_ios_per_sec": 0, 00:23:53.998 "rw_mbytes_per_sec": 0, 00:23:53.998 "r_mbytes_per_sec": 0, 00:23:53.998 "w_mbytes_per_sec": 0 00:23:53.998 }, 00:23:53.998 "claimed": false, 00:23:53.998 "zoned": false, 00:23:53.998 "supported_io_types": { 00:23:53.998 "read": true, 00:23:53.998 "write": true, 00:23:53.998 "unmap": true, 00:23:53.998 "flush": false, 00:23:53.998 "reset": true, 00:23:53.998 "nvme_admin": false, 00:23:53.998 "nvme_io": false, 00:23:53.998 "nvme_io_md": false, 00:23:53.998 "write_zeroes": true, 00:23:53.998 "zcopy": false, 00:23:53.998 "get_zone_info": false, 00:23:53.998 "zone_management": false, 00:23:53.998 "zone_append": false, 00:23:53.998 "compare": false, 00:23:53.998 "compare_and_write": false, 00:23:53.998 "abort": false, 00:23:53.998 "seek_hole": true, 00:23:53.998 "seek_data": true, 00:23:53.998 "copy": false, 00:23:53.998 "nvme_iov_md": false 00:23:53.998 }, 00:23:53.998 "driver_specific": { 00:23:53.998 "lvol": { 00:23:53.998 "lvol_store_uuid": "ae23ae9c-c3cd-49de-b25c-d81dde85780d", 00:23:53.998 "base_bdev": "nvme0n1", 00:23:53.998 "thin_provision": true, 00:23:53.998 "num_allocated_clusters": 0, 00:23:53.998 "snapshot": false, 00:23:53.998 "clone": false, 00:23:53.998 "esnap_clone": false 00:23:53.998 } 00:23:53.998 } 00:23:53.998 } 00:23:53.998 ]' 00:23:53.998 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:53.998 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:53.998 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:54.259 15:16:54 ftl.ftl_restore -- 
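
The NV-cache side, condensed from the trace: the cache controller was attached as nvc0 (above, at ftl/common.sh@45) and a single 5171 MiB split is now carved from its namespace; the first split child becomes FTL's write-buffer cache:

    "$rpc_py" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    "$rpc_py" bdev_split_create nvc0n1 -s 5171 1   # one 5171 MiB split -> nvc0n1p0

Only that first child (nvc0n1p0) is used; it is handed to bdev_ftl_create as the -c argument just below.
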
common/autotest_common.sh@1388 -- # nb=26476544 00:23:54.259 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:54.259 15:16:54 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:54.259 15:16:54 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:23:54.259 15:16:54 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0730f254-dc8b-4563-825e-99cb77aa1e08 --l2p_dram_limit 10' 00:23:54.259 15:16:54 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:23:54.259 15:16:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:54.259 15:16:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:54.259 15:16:54 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:23:54.259 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:23:54.259 15:16:54 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0730f254-dc8b-4563-825e-99cb77aa1e08 --l2p_dram_limit 10 -c nvc0n1p0 00:23:54.259 [2024-11-20 15:16:55.038908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.259 [2024-11-20 15:16:55.039219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:54.259 [2024-11-20 15:16:55.039329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:54.259 [2024-11-20 15:16:55.039370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.259 [2024-11-20 15:16:55.039514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.259 [2024-11-20 15:16:55.039618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:54.259 [2024-11-20 15:16:55.039662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:23:54.259 [2024-11-20 15:16:55.039695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.259 [2024-11-20 15:16:55.039832] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:54.259 [2024-11-20 15:16:55.041031] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:54.259 [2024-11-20 15:16:55.041190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.259 [2024-11-20 15:16:55.041270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:54.259 [2024-11-20 15:16:55.041311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.368 ms 00:23:54.259 [2024-11-20 15:16:55.041343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.259 [2024-11-20 15:16:55.041649] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7d8770c9-24dd-42ab-a5d0-936f5b553fe3 00:23:54.259 [2024-11-20 15:16:55.044103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.259 [2024-11-20 15:16:55.044236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:54.259 [2024-11-20 15:16:55.044309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:54.259 [2024-11-20 15:16:55.044352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.259 [2024-11-20 15:16:55.058917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.259 [2024-11-20 
15:16:55.059191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:54.259 [2024-11-20 15:16:55.059351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.382 ms 00:23:54.259 [2024-11-20 15:16:55.059395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.259 [2024-11-20 15:16:55.059565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.259 [2024-11-20 15:16:55.059608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:54.259 [2024-11-20 15:16:55.059706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:23:54.259 [2024-11-20 15:16:55.059786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.259 [2024-11-20 15:16:55.059923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.259 [2024-11-20 15:16:55.060066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:54.259 [2024-11-20 15:16:55.060100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:54.259 [2024-11-20 15:16:55.060120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.259 [2024-11-20 15:16:55.060154] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:54.259 [2024-11-20 15:16:55.067012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.259 [2024-11-20 15:16:55.067062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:54.259 [2024-11-20 15:16:55.067079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.877 ms 00:23:54.259 [2024-11-20 15:16:55.067090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.259 [2024-11-20 15:16:55.067138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.259 [2024-11-20 15:16:55.067149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:54.259 [2024-11-20 15:16:55.067164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:54.259 [2024-11-20 15:16:55.067175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.259 [2024-11-20 15:16:55.067227] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:54.259 [2024-11-20 15:16:55.067373] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:54.259 [2024-11-20 15:16:55.067396] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:54.259 [2024-11-20 15:16:55.067412] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:54.259 [2024-11-20 15:16:55.067429] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:54.259 [2024-11-20 15:16:55.067442] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:54.259 [2024-11-20 15:16:55.067457] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:54.259 [2024-11-20 15:16:55.067468] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:54.259 [2024-11-20 15:16:55.067485] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:54.259 [2024-11-20 15:16:55.067496] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:54.259 [2024-11-20 15:16:55.067511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.259 [2024-11-20 15:16:55.067522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:54.259 [2024-11-20 15:16:55.067536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:23:54.259 [2024-11-20 15:16:55.067563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.259 [2024-11-20 15:16:55.067644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.259 [2024-11-20 15:16:55.067655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:54.259 [2024-11-20 15:16:55.067670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:54.259 [2024-11-20 15:16:55.067680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.259 [2024-11-20 15:16:55.067804] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:54.259 [2024-11-20 15:16:55.067819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:54.259 [2024-11-20 15:16:55.067834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:54.259 [2024-11-20 15:16:55.067845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.259 [2024-11-20 15:16:55.067858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:54.259 [2024-11-20 15:16:55.067868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:54.259 [2024-11-20 15:16:55.067880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:54.259 [2024-11-20 15:16:55.067891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:54.259 [2024-11-20 15:16:55.067904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:54.259 [2024-11-20 15:16:55.067913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:54.259 [2024-11-20 15:16:55.067926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:54.259 [2024-11-20 15:16:55.067937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:54.260 [2024-11-20 15:16:55.067955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:54.260 [2024-11-20 15:16:55.067964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:54.260 [2024-11-20 15:16:55.067978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:54.260 [2024-11-20 15:16:55.067988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.260 [2024-11-20 15:16:55.068005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:54.260 [2024-11-20 15:16:55.068015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:54.260 [2024-11-20 15:16:55.068027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.260 [2024-11-20 15:16:55.068038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:54.260 [2024-11-20 15:16:55.068050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:54.260 [2024-11-20 15:16:55.068060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:54.260 [2024-11-20 15:16:55.068072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:54.260 
[2024-11-20 15:16:55.068081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:54.260 [2024-11-20 15:16:55.068093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:54.260 [2024-11-20 15:16:55.068102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:54.260 [2024-11-20 15:16:55.068115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:54.260 [2024-11-20 15:16:55.068124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:54.260 [2024-11-20 15:16:55.068136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:54.260 [2024-11-20 15:16:55.068145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:54.260 [2024-11-20 15:16:55.068157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:54.260 [2024-11-20 15:16:55.068166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:54.260 [2024-11-20 15:16:55.068182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:54.260 [2024-11-20 15:16:55.068192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:54.260 [2024-11-20 15:16:55.068203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:54.260 [2024-11-20 15:16:55.068213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:54.260 [2024-11-20 15:16:55.068224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:54.260 [2024-11-20 15:16:55.068233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:54.260 [2024-11-20 15:16:55.068246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:54.260 [2024-11-20 15:16:55.068255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.260 [2024-11-20 15:16:55.068267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:54.260 [2024-11-20 15:16:55.068276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:54.260 [2024-11-20 15:16:55.068288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.260 [2024-11-20 15:16:55.068297] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:54.260 [2024-11-20 15:16:55.068312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:54.260 [2024-11-20 15:16:55.068321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:54.260 [2024-11-20 15:16:55.068336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:54.260 [2024-11-20 15:16:55.068347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:54.260 [2024-11-20 15:16:55.068362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:54.260 [2024-11-20 15:16:55.068372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:54.260 [2024-11-20 15:16:55.068385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:54.260 [2024-11-20 15:16:55.068395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:54.260 [2024-11-20 15:16:55.068408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:54.260 [2024-11-20 15:16:55.068424] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:54.260 [2024-11-20 
15:16:55.068441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:54.260 [2024-11-20 15:16:55.068456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:54.260 [2024-11-20 15:16:55.068470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:54.260 [2024-11-20 15:16:55.068481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:54.260 [2024-11-20 15:16:55.068494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:54.260 [2024-11-20 15:16:55.068505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:54.260 [2024-11-20 15:16:55.068518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:54.260 [2024-11-20 15:16:55.068529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:54.260 [2024-11-20 15:16:55.068543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:54.260 [2024-11-20 15:16:55.068553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:54.260 [2024-11-20 15:16:55.068570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:54.260 [2024-11-20 15:16:55.068580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:54.260 [2024-11-20 15:16:55.068597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:54.260 [2024-11-20 15:16:55.068609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:54.260 [2024-11-20 15:16:55.068623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:54.260 [2024-11-20 15:16:55.068633] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:54.260 [2024-11-20 15:16:55.068647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:54.260 [2024-11-20 15:16:55.068659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:54.260 [2024-11-20 15:16:55.068672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:54.260 [2024-11-20 15:16:55.068683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:54.260 [2024-11-20 15:16:55.068697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:54.260 [2024-11-20 15:16:55.068708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.260 [2024-11-20 15:16:55.068731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:54.260 [2024-11-20 15:16:55.068742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:23:54.260 [2024-11-20 15:16:55.068756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.260 [2024-11-20 15:16:55.068805] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:54.260 [2024-11-20 15:16:55.068825] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:57.547 [2024-11-20 15:16:58.057726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.547 [2024-11-20 15:16:58.058044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:57.547 [2024-11-20 15:16:58.058077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2993.765 ms 00:23:57.547 [2024-11-20 15:16:58.058093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.547 [2024-11-20 15:16:58.106423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.547 [2024-11-20 15:16:58.106496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:57.547 [2024-11-20 15:16:58.106515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.047 ms 00:23:57.547 [2024-11-20 15:16:58.106531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.547 [2024-11-20 15:16:58.106708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.547 [2024-11-20 15:16:58.106742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:57.547 [2024-11-20 15:16:58.106755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:57.547 [2024-11-20 15:16:58.106779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.547 [2024-11-20 15:16:58.162459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.547 [2024-11-20 15:16:58.162548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:57.547 [2024-11-20 15:16:58.162566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.698 ms 00:23:57.547 [2024-11-20 15:16:58.162581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.547 [2024-11-20 15:16:58.162636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.547 [2024-11-20 15:16:58.162656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:57.547 [2024-11-20 15:16:58.162669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:57.547 [2024-11-20 15:16:58.162683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.547 [2024-11-20 15:16:58.163492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.547 [2024-11-20 15:16:58.163518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:57.547 [2024-11-20 15:16:58.163530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:23:57.547 [2024-11-20 15:16:58.163544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.547 
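
Every trace_step Action in this run, from 'Check configuration' through the layout dump above and the metadata initialization below, belongs to a single FTL startup flow driven by the one create RPC issued at restore.sh line 58. Reconstructed from the arguments visible in the trace:

    "$rpc_py" -t 240 bdev_ftl_create -b ftl0 \
        -d 0730f254-dc8b-4563-825e-99cb77aa1e08 \
        --l2p_dram_limit 10 \
        -c nvc0n1p0

The 10 MiB --l2p_dram_limit is what ftl_l2p_cache.c acknowledges just below with 'l2p maximum resident size is: 9 (of 10) MiB'.
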
[2024-11-20 15:16:58.163665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.547 [2024-11-20 15:16:58.163680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:57.547 [2024-11-20 15:16:58.163694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:57.547 [2024-11-20 15:16:58.163711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.547 [2024-11-20 15:16:58.190237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.547 [2024-11-20 15:16:58.190301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:57.547 [2024-11-20 15:16:58.190318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.533 ms 00:23:57.547 [2024-11-20 15:16:58.190332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.547 [2024-11-20 15:16:58.219417] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:57.547 [2024-11-20 15:16:58.224635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.547 [2024-11-20 15:16:58.224676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:57.547 [2024-11-20 15:16:58.224696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.217 ms 00:23:57.547 [2024-11-20 15:16:58.224708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.547 [2024-11-20 15:16:58.306893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.547 [2024-11-20 15:16:58.306982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:57.547 [2024-11-20 15:16:58.307005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.227 ms 00:23:57.547 [2024-11-20 15:16:58.307018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.547 [2024-11-20 15:16:58.307251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.547 [2024-11-20 15:16:58.307271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:57.547 [2024-11-20 15:16:58.307290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:23:57.547 [2024-11-20 15:16:58.307301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.547 [2024-11-20 15:16:58.346320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.547 [2024-11-20 15:16:58.346407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:57.547 [2024-11-20 15:16:58.346430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.012 ms 00:23:57.547 [2024-11-20 15:16:58.346443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.805 [2024-11-20 15:16:58.385035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.805 [2024-11-20 15:16:58.385108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:57.805 [2024-11-20 15:16:58.385138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.582 ms 00:23:57.805 [2024-11-20 15:16:58.385155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.805 [2024-11-20 15:16:58.386042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.805 [2024-11-20 15:16:58.386078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:57.805 
[2024-11-20 15:16:58.386096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:23:57.805 [2024-11-20 15:16:58.386113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.805 [2024-11-20 15:16:58.497954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.805 [2024-11-20 15:16:58.498236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:57.805 [2024-11-20 15:16:58.498279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.906 ms 00:23:57.805 [2024-11-20 15:16:58.498292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.805 [2024-11-20 15:16:58.541825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.805 [2024-11-20 15:16:58.541905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:57.805 [2024-11-20 15:16:58.541928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.448 ms 00:23:57.805 [2024-11-20 15:16:58.541940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.805 [2024-11-20 15:16:58.583429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.805 [2024-11-20 15:16:58.583510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:57.805 [2024-11-20 15:16:58.583534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.482 ms 00:23:57.805 [2024-11-20 15:16:58.583546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.805 [2024-11-20 15:16:58.625741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.805 [2024-11-20 15:16:58.625834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:57.805 [2024-11-20 15:16:58.625858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.186 ms 00:23:57.805 [2024-11-20 15:16:58.625871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.805 [2024-11-20 15:16:58.625960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.805 [2024-11-20 15:16:58.625973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:57.805 [2024-11-20 15:16:58.625994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:57.805 [2024-11-20 15:16:58.626005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.805 [2024-11-20 15:16:58.626181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.805 [2024-11-20 15:16:58.626195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:57.805 [2024-11-20 15:16:58.626214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:57.805 [2024-11-20 15:16:58.626226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.805 [2024-11-20 15:16:58.627752] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3594.103 ms, result 0 00:23:57.805 { 00:23:57.805 "name": "ftl0", 00:23:57.805 "uuid": "7d8770c9-24dd-42ab-a5d0-936f5b553fe3" 00:23:57.805 } 00:23:58.062 15:16:58 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:23:58.062 15:16:58 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:58.320 15:16:58 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:23:58.320 15:16:58 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:58.320 [2024-11-20 15:16:59.106092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.320 [2024-11-20 15:16:59.106183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:58.320 [2024-11-20 15:16:59.106203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:58.320 [2024-11-20 15:16:59.106229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.320 [2024-11-20 15:16:59.106263] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:58.320 [2024-11-20 15:16:59.111223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.320 [2024-11-20 15:16:59.111264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:58.320 [2024-11-20 15:16:59.111283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.941 ms 00:23:58.320 [2024-11-20 15:16:59.111294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.320 [2024-11-20 15:16:59.111574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.320 [2024-11-20 15:16:59.111593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:58.320 [2024-11-20 15:16:59.111607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:23:58.320 [2024-11-20 15:16:59.111618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.320 [2024-11-20 15:16:59.114164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.320 [2024-11-20 15:16:59.114182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:58.320 [2024-11-20 15:16:59.114197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.529 ms 00:23:58.320 [2024-11-20 15:16:59.114208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.320 [2024-11-20 15:16:59.119251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.320 [2024-11-20 15:16:59.119290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:58.320 [2024-11-20 15:16:59.119310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.023 ms 00:23:58.320 [2024-11-20 15:16:59.119320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.581 [2024-11-20 15:16:59.157493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.581 [2024-11-20 15:16:59.157792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:58.581 [2024-11-20 15:16:59.157831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.140 ms 00:23:58.581 [2024-11-20 15:16:59.157843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.582 [2024-11-20 15:16:59.181759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.582 [2024-11-20 15:16:59.181828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:58.582 [2024-11-20 15:16:59.181851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.862 ms 00:23:58.582 [2024-11-20 15:16:59.181863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.582 [2024-11-20 15:16:59.182073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.582 [2024-11-20 15:16:59.182090] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:58.582 [2024-11-20 15:16:59.182106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:23:58.582 [2024-11-20 15:16:59.182117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.582 [2024-11-20 15:16:59.220774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.582 [2024-11-20 15:16:59.221022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:58.582 [2024-11-20 15:16:59.221058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.686 ms 00:23:58.582 [2024-11-20 15:16:59.221069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.582 [2024-11-20 15:16:59.258564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.582 [2024-11-20 15:16:59.258619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:58.582 [2024-11-20 15:16:59.258640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.492 ms 00:23:58.582 [2024-11-20 15:16:59.258652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.582 [2024-11-20 15:16:59.296685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.582 [2024-11-20 15:16:59.296758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:58.582 [2024-11-20 15:16:59.296781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.022 ms 00:23:58.582 [2024-11-20 15:16:59.296792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.582 [2024-11-20 15:16:59.333535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.582 [2024-11-20 15:16:59.333603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:58.582 [2024-11-20 15:16:59.333626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.657 ms 00:23:58.582 [2024-11-20 15:16:59.333637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.582 [2024-11-20 15:16:59.333692] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:58.582 [2024-11-20 15:16:59.333713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:58.582 [2024-11-20 15:16:59.333745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:58.582 [2024-11-20 15:16:59.333758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:58.582 [2024-11-20 15:16:59.333773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:58.582 [2024-11-20 15:16:59.333785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:58.582 [2024-11-20 15:16:59.333800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:58.582 [2024-11-20 15:16:59.333812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:58.582 [2024-11-20 15:16:59.333832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:58.582 [2024-11-20 15:16:59.333843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:58.582 [2024-11-20 15:16:59.333858] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10 .. Band 100: 0 / 261120 wr_cnt: 0 state: free [91 identical records, 2024-11-20 15:16:59.333870-15:16:59.335055, condensed]
[2024-11-20 15:16:59.335074] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-20 15:16:59.335093] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7d8770c9-24dd-42ab-a5d0-936f5b553fe3
[2024-11-20 15:16:59.335105] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-20 15:16:59.335124] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-20 15:16:59.335134] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-20 15:16:59.335153] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-20 15:16:59.335163] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-11-20 15:16:59.335177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
[2024-11-20 15:16:59.335188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
[2024-11-20 15:16:59.335201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
[2024-11-20 15:16:59.335210] ftl_debug.c: 220:ftl_dev_dump_stats:
*NOTICE*: [FTL][ftl0] start: 0 00:23:58.584 [2024-11-20 15:16:59.335224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.584 [2024-11-20 15:16:59.335235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:58.584 [2024-11-20 15:16:59.335249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.536 ms 00:23:58.584 [2024-11-20 15:16:59.335260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.584 [2024-11-20 15:16:59.356626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.584 [2024-11-20 15:16:59.356684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:58.584 [2024-11-20 15:16:59.356721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.326 ms 00:23:58.584 [2024-11-20 15:16:59.356746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.584 [2024-11-20 15:16:59.357434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.584 [2024-11-20 15:16:59.357458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:58.584 [2024-11-20 15:16:59.357478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:23:58.584 [2024-11-20 15:16:59.357489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.846 [2024-11-20 15:16:59.427353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.846 [2024-11-20 15:16:59.427433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:58.846 [2024-11-20 15:16:59.427455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.846 [2024-11-20 15:16:59.427466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.846 [2024-11-20 15:16:59.427605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.846 [2024-11-20 15:16:59.427618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:58.846 [2024-11-20 15:16:59.427637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.846 [2024-11-20 15:16:59.427647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.846 [2024-11-20 15:16:59.427820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.846 [2024-11-20 15:16:59.427837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:58.846 [2024-11-20 15:16:59.427851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.846 [2024-11-20 15:16:59.427862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.846 [2024-11-20 15:16:59.427894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.846 [2024-11-20 15:16:59.427906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:58.846 [2024-11-20 15:16:59.427920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.846 [2024-11-20 15:16:59.427930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.846 [2024-11-20 15:16:59.564547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.846 [2024-11-20 15:16:59.564635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:58.846 [2024-11-20 15:16:59.564657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
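
The statistics block above reads as follows: ftl_debug.c pairs total writes (media writes the FTL itself issued, here 960, presumably metadata since no user I/O ran before this shutdown) with user writes (0), and WAF is simply their ratio, printed as inf when the denominator is zero. A minimal sketch of that readout, using only the two counters from the dump (illustrative shell, not SPDK code):

  total_writes=960 user_writes=0
  # WAF = total media writes / user writes; "inf" when nothing was user-written
  awk -v t="$total_writes" -v u="$user_writes" 'BEGIN {
      if (u == 0) print "WAF: inf"
      else        printf "WAF: %.2f\n", t / u
  }'
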
00:23:58.846 [2024-11-20 15:16:59.564669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.846 [2024-11-20 15:16:59.673844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.846 [2024-11-20 15:16:59.673920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:58.846 [2024-11-20 15:16:59.673940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.846 [2024-11-20 15:16:59.673957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.846 [2024-11-20 15:16:59.674118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.846 [2024-11-20 15:16:59.674131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:58.846 [2024-11-20 15:16:59.674146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.846 [2024-11-20 15:16:59.674156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.846 [2024-11-20 15:16:59.674229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.847 [2024-11-20 15:16:59.674242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:58.847 [2024-11-20 15:16:59.674256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.847 [2024-11-20 15:16:59.674266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.847 [2024-11-20 15:16:59.674400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.847 [2024-11-20 15:16:59.674414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:58.847 [2024-11-20 15:16:59.674428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.847 [2024-11-20 15:16:59.674438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.847 [2024-11-20 15:16:59.674483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.847 [2024-11-20 15:16:59.674496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:58.847 [2024-11-20 15:16:59.674511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.847 [2024-11-20 15:16:59.674521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.847 [2024-11-20 15:16:59.674575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.847 [2024-11-20 15:16:59.674592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:58.847 [2024-11-20 15:16:59.674606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.847 [2024-11-20 15:16:59.674617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.847 [2024-11-20 15:16:59.674677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.847 [2024-11-20 15:16:59.674690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:58.847 [2024-11-20 15:16:59.674704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.847 [2024-11-20 15:16:59.674714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.847 [2024-11-20 15:16:59.674903] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 569.689 ms, result 0 00:23:59.106 true 00:23:59.106 15:16:59 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79337 
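
The xtrace that follows steps through the killprocess helper from autotest_common.sh: guard against an empty pid, probe liveness with kill -0, resolve the command name via ps (reactor_0 here), special-case sudo-wrapped processes, then kill and wait. Reconstructed from the trace alone as a hedged sketch (the sudo branch body is inferred, and the real helper may differ):

  killprocess() {
      # refuse an empty pid, then make sure the process is still alive
      [ -z "$1" ] && return 1
      kill -0 "$1" || return 1
      # on Linux, resolve the command name for the sudo special case
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$1")
      fi
      if [ "$process_name" = sudo ]; then
          : # inferred: a sudo wrapper would need its child signalled instead
      fi
      echo "killing process with pid $1"
      kill "$1"
      wait "$1"   # reap the child so its exit status is collected
  }
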
00:23:59.106 15:16:59 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79337 ']' 00:23:59.106 15:16:59 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79337 00:23:59.106 15:16:59 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:23:59.106 15:16:59 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.106 15:16:59 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79337 00:23:59.106 15:16:59 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:59.106 15:16:59 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:59.106 killing process with pid 79337 00:23:59.106 15:16:59 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79337' 00:23:59.106 15:16:59 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79337 00:23:59.106 15:16:59 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79337 00:24:04.438 15:17:05 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:24:08.626 262144+0 records in 00:24:08.626 262144+0 records out 00:24:08.626 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.28908 s, 250 MB/s 00:24:08.626 15:17:09 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:10.587 15:17:11 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:10.587 [2024-11-20 15:17:11.293965] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:24:10.588 [2024-11-20 15:17:11.294115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79579 ] 00:24:10.846 [2024-11-20 15:17:11.480217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.846 [2024-11-20 15:17:11.626171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.415 [2024-11-20 15:17:12.061063] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:11.415 [2024-11-20 15:17:12.061149] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:11.415 [2024-11-20 15:17:12.232850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.415 [2024-11-20 15:17:12.232921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:11.415 [2024-11-20 15:17:12.232944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:11.415 [2024-11-20 15:17:12.232956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.415 [2024-11-20 15:17:12.233032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.415 [2024-11-20 15:17:12.233045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:11.415 [2024-11-20 15:17:12.233061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:11.415 [2024-11-20 15:17:12.233072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.415 [2024-11-20 15:17:12.233096] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:24:11.415 [2024-11-20 15:17:12.234135] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:11.415 [2024-11-20 15:17:12.234168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.415 [2024-11-20 15:17:12.234181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:11.415 [2024-11-20 15:17:12.234192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:24:11.415 [2024-11-20 15:17:12.234203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.415 [2024-11-20 15:17:12.236686] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:11.675 [2024-11-20 15:17:12.258622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.675 [2024-11-20 15:17:12.258692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:11.675 [2024-11-20 15:17:12.258712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.970 ms 00:24:11.675 [2024-11-20 15:17:12.258731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.675 [2024-11-20 15:17:12.258878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.675 [2024-11-20 15:17:12.258893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:11.675 [2024-11-20 15:17:12.258906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:11.675 [2024-11-20 15:17:12.258916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.675 [2024-11-20 15:17:12.272500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.675 [2024-11-20 15:17:12.272550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:11.675 [2024-11-20 15:17:12.272566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.496 ms 00:24:11.675 [2024-11-20 15:17:12.272594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.675 [2024-11-20 15:17:12.272739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.675 [2024-11-20 15:17:12.272756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:11.675 [2024-11-20 15:17:12.272768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:24:11.675 [2024-11-20 15:17:12.272779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.675 [2024-11-20 15:17:12.272871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.675 [2024-11-20 15:17:12.272884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:11.675 [2024-11-20 15:17:12.272896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:11.675 [2024-11-20 15:17:12.272907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.675 [2024-11-20 15:17:12.272948] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:11.675 [2024-11-20 15:17:12.278643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.675 [2024-11-20 15:17:12.278676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:11.675 [2024-11-20 15:17:12.278689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.723 ms 00:24:11.675 [2024-11-20 15:17:12.278708] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-20 15:17:12.278756-278791] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.011 ms, status: 0
[2024-11-20 15:17:12.278835] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-20 15:17:12.278865-278931] upgrade/ftl_sb_v5.c: 278/287/294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc/base/layout blob load 0x150/0x48/0x190 bytes
[2024-11-20 15:17:12.279041-279077] upgrade/ftl_sb_v5.c: 92/101/109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc/base/layout blob store 0x150/0x48/0x190 bytes
[2024-11-20 15:17:12.279091] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-20 15:17:12.279103] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-20 15:17:12.279115] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-11-20 15:17:12.279125] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-20 15:17:12.279136] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-20 15:17:12.279154] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-20 15:17:12.279166-279197] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.336 ms, status: 0
[2024-11-20 15:17:12.279274-279305] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.055 ms, status: 0
[2024-11-20 15:17:12.279420-279862] ftl_layout.c: 768:ftl_layout_dump, 130-133:dump_region: *NOTICE*: [FTL][ftl0] NV cache layout (region: offset / blocks, both MiB):
    sb:               0.00 /  0.12
    l2p:              0.12 / 80.00
    band_md:         80.12 /  0.50
    band_md_mirror:  80.62 /  0.50
    nvc_md:         113.88 /  0.12
    nvc_md_mirror:  114.00 /  0.12
    p2l0:            81.12 /  8.00
    p2l1:            89.12 /  8.00
    p2l2:            97.12 /  8.00
    p2l3:           105.12 /  8.00
    trim_md:        113.12 /  0.25
    trim_md_mirror: 113.38 /  0.25
    trim_log:       113.62 /  0.12
    trim_log_mirror: 113.75 / 0.12
[2024-11-20 15:17:12.279871-279958] ftl_layout.c: 775:ftl_layout_dump, 130-133:dump_region: *NOTICE*: [FTL][ftl0] Base device layout (region: offset / blocks, both MiB):
    sb_mirror:        0.00 /      0.12
    vmap:        102400.25 /      3.38
    data_btm:         0.25 / 102400.00
[2024-11-20 15:17:12.279969-280128] upgrade/ftl_sb_v5.c: 408/416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
    Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
    Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
    Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
    Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
    Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
    Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
    Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
    Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
    Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
    Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
    Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
    Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
[2024-11-20 15:17:12.280139-280202] upgrade/ftl_sb_v5.c: 422/430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
    Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
    Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-20 15:17:12.280213-280244] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.850 ms, status: 0
[2024-11-20 15:17:12.330494-330592] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 50.272 ms, status: 0
[2024-11-20 15:17:12.330729-330766] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.073 ms, status: 0
[2024-11-20 15:17:12.396787-396884] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 65.994 ms, status: 0
[2024-11-20 15:17:12.396973-397014] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.005 ms, status: 0
[2024-11-20 15:17:12.397841-397888] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.739 ms, status: 0
[2024-11-20 15:17:12.398043-398085] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.128 ms, status: 0
[2024-11-20 15:17:12.421748-421824] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 23.675 ms, status: 0
[2024-11-20 15:17:12.442373] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
[2024-11-20 15:17:12.442414] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-20 15:17:12.442432-442466] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 20.485 ms, status: 0
[2024-11-20 15:17:12.472998-473086] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 30.534 ms, status: 0
[2024-11-20 15:17:12.492128-492217] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 19.024 ms, status: 0
[2024-11-20 15:17:12.510295-510352] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 18.065 ms, status: 0
[2024-11-20 15:17:12.511197-511244] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.738 ms, status: 0
[2024-11-20 15:17:12.608118-608243] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 97.002 ms, status: 0
[2024-11-20 15:17:12.619879] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-11-20 15:17:12.624537-624594] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 16.257 ms, status: 0
[2024-11-20 15:17:12.624752-624791] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.013 ms, status: 0
[2024-11-20 15:17:12.624893-624929] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 0.049 ms, status: 0
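
The layout numbers in this startup are self-consistent and worth checking once: 20971520 L2P entries at an address size of 4 bytes are exactly the 80.00 MiB of the l2p region, and the base-dev data region of 0x1900000 blocks (26214400, at the 4 KiB FTL block size implied by the data_btm figures) is exactly 102400.00 MiB. Both identities as a quick check, values copied from the dump (plain awk, not SPDK code):

  awk 'BEGIN {
      # L2P table: entries * address size, in MiB           -> 80.00
      printf "l2p region: %.2f MiB\n", 20971520 * 4 / 1048576
      # base data region: 0x1900000 blocks of 4 KiB, in MiB -> 102400.00
      printf "data_btm:   %.2f MiB\n", 26214400 * 4096 / 1048576
  }'
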
[2024-11-20 15:17:12.624956-624988] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.007 ms, status: 0
[2024-11-20 15:17:12.625038] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-20 15:17:12.625052-625090] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.015 ms, status: 0
[2024-11-20 15:17:12.663179-663246] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 38.129 ms, status: 0
[2024-11-20 15:17:12.663343-663378] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.045 ms, status: 0
[2024-11-20 15:17:12.664905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 432.140 ms, result 0
[2024-11-20T15:17:15.085Z] Copying: 25/1024 [MB] (25 MBps) .. [2024-11-20T15:17:51.813Z] Copying: 1024/1024 [MB] (average 26 MBps) [39 progress records at 24-28 MBps, condensed]
[2024-11-20 15:17:51.495824-495928] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.003 ms, status: 0
[2024-11-20 15:17:51.495950] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-20 15:17:51.500678-500755] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 4.715 ms, status: 0
[2024-11-20 15:17:51.502812-502878] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 2.017 ms, status: 0
[2024-11-20 15:17:51.520375-520442] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 17.506 ms, status: 0
[2024-11-20 15:17:51.525559-525641] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims, duration: 5.079 ms, status: 0
[2024-11-20 15:17:51.564499-564581] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 38.868 ms, status: 0
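
The condensed progress above is internally consistent: startup finished at 15:17:12.664, the last copy record lands at 15:17:51.813, and moving 1024 MB in those ~39 s is the reported 26 MBps average. The raw dd from /dev/urandom earlier ran at 250 MB/s (1073741824 B / 4.28908 s), which suggests the FTL device, not the source file, bounds this phase. The same arithmetic as a one-liner (values copied from the log):

  awk 'BEGIN {
      printf "dd:      %.0f MB/s\n", 1073741824 / 4.28908 / 1000000   # ~250
      printf "spdk_dd: %.0f MBps\n", 1024 / (51.813 - 12.664)         # ~26
  }'
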
[2024-11-20 15:17:51.586441-586521] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 21.849 ms, status: 0
[2024-11-20 15:17:51.586671-586717] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 0.101 ms, status: 0
[2024-11-20 15:17:51.624652-624736] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata, duration: 37.968 ms, status: 0
[2024-11-20 15:17:51.661836-661934] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata, duration: 37.115 ms, status: 0
[2024-11-20 15:17:51.699110-699207] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 37.184 ms, status: 0
[2024-11-20 15:17:51.735092-735182] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 35.832 ms, status: 0
[2024-11-20 15:17:51.735233] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-20 15:17:51.735256-736670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free [100 identical records, condensed]
[2024-11-20 15:17:51.736692] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-20 15:17:51.736713] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7d8770c9-24dd-42ab-a5d0-936f5b553fe3
[2024-11-20 15:17:51.736741] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-20 15:17:51.736754] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-20 15:17:51.736767] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:
0 00:24:50.979 [2024-11-20 15:17:51.736780] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:50.979 [2024-11-20 15:17:51.736793] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:50.979 [2024-11-20 15:17:51.736807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:50.979 [2024-11-20 15:17:51.736821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:50.979 [2024-11-20 15:17:51.736846] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:50.979 [2024-11-20 15:17:51.736858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:50.979 [2024-11-20 15:17:51.736872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.979 [2024-11-20 15:17:51.736885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:50.979 [2024-11-20 15:17:51.736898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.642 ms 00:24:50.979 [2024-11-20 15:17:51.736912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.979 [2024-11-20 15:17:51.758226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.979 [2024-11-20 15:17:51.758271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:50.979 [2024-11-20 15:17:51.758287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.300 ms 00:24:50.979 [2024-11-20 15:17:51.758297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.979 [2024-11-20 15:17:51.758951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.979 [2024-11-20 15:17:51.758973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:50.979 [2024-11-20 15:17:51.758986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.624 ms 00:24:50.979 [2024-11-20 15:17:51.758997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.237 [2024-11-20 15:17:51.813379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:51.237 [2024-11-20 15:17:51.813447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:51.238 [2024-11-20 15:17:51.813464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:51.238 [2024-11-20 15:17:51.813475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.238 [2024-11-20 15:17:51.813559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:51.238 [2024-11-20 15:17:51.813571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:51.238 [2024-11-20 15:17:51.813582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:51.238 [2024-11-20 15:17:51.813593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.238 [2024-11-20 15:17:51.813736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:51.238 [2024-11-20 15:17:51.813753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:51.238 [2024-11-20 15:17:51.813764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:51.238 [2024-11-20 15:17:51.813774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.238 [2024-11-20 15:17:51.813794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:51.238 [2024-11-20 15:17:51.813805] mngt/ftl_mngt.c: 428:trace_step: 
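The statistics block above reads total writes: 960, user writes: 0, WAF: inf. Write amplification factor is media writes divided by user-issued writes, and with zero user writes the ratio degenerates to infinity, which the log prints as "inf". A minimal sketch of that arithmetic, illustrative only and not SPDK's actual ftl_debug.c code:

```c
#include <stdio.h>

/* WAF as reported in the dump above: total media writes divided by
 * user writes. With IEEE 754 doubles, 960.0 / 0.0 is +infinity. */
static double waf(double total_writes, double user_writes)
{
    return total_writes / user_writes;
}

int main(void)
{
    printf("WAF: %g\n", waf(960.0, 0.0)); /* prints "WAF: inf" */
    return 0;
}
```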
00:24:51.238 [FTL][ftl0] Rollback: Initialize valid map (duration: 0.000 ms, status: 0)
00:24:51.238 [FTL][ftl0] Rollback: Initialize NV cache (duration: 0.000 ms, status: 0)
00:24:51.238 [FTL][ftl0] Rollback: Initialize metadata (duration: 0.000 ms, status: 0)
00:24:51.238 [FTL][ftl0] Rollback: Initialize core IO channel (duration: 0.000 ms, status: 0)
00:24:51.238 [FTL][ftl0] Rollback: Initialize bands (duration: 0.000 ms, status: 0)
00:24:51.238 [FTL][ftl0] Rollback: Initialize memory pools (duration: 0.000 ms, status: 0)
00:24:51.238 [FTL][ftl0] Rollback: Initialize superblock (duration: 0.000 ms, status: 0)
00:24:51.238 [FTL][ftl0] Rollback: Open cache bdev (duration: 0.000 ms, status: 0)
00:24:51.238 [FTL][ftl0] Rollback: Open base bdev (duration: 0.000 ms, status: 0)
00:24:51.238 [2024-11-20 15:17:52.055763] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 560.778 ms, result 0
00:24:52.616 15:17:53 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
[2024-11-20 15:17:53.355674] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization...
[2024-11-20 15:17:53.356360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79992 ]
00:24:52.875 [2024-11-20 15:17:53.543108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:52.875 [2024-11-20 15:17:53.690534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:53.443 [2024-11-20 15:17:54.117053] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:53.443 [2024-11-20 15:17:54.117141] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:53.704 [FTL][ftl0] Action: Check configuration (duration: 0.007 ms, status: 0)
00:24:53.704 [FTL][ftl0] Action: Open base bdev (duration: 0.053 ms, status: 0)
00:24:53.704 [2024-11-20 15:17:54.285438] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:53.704 [2024-11-20 15:17:54.286478] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:53.704 [FTL][ftl0] Action: Open cache bdev (duration: 1.076 ms, status: 0)
00:24:53.704 [2024-11-20 15:17:54.289003] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
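The spdk_dd invocation above reads 262144 logical blocks from ftl0 into the testfile. At the 4 KiB FTL block size implied by this log's own layout figures, that is exactly the 1024 MB total the Copying lines later report. A quick check; the 4096-byte constant is taken from this log, not from any spdk_dd default:

```c
#include <stdio.h>

int main(void)
{
    const unsigned long block_size = 4096;   /* FTL logical block, bytes (per this log) */
    const unsigned long count      = 262144; /* --count passed to spdk_dd above */
    unsigned long mib = count * block_size / (1024 * 1024);
    printf("%lu blocks x %lu B = %lu MiB\n", count, block_size, mib);
    /* 262144 x 4096 B = 1024 MiB, matching "Copying: 1024/1024 [MB]" below */
    return 0;
}
```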
00:24:53.704 [FTL][ftl0] Action: Load super block (duration: 22.221 ms, status: 0)
00:24:53.704 [FTL][ftl0] Action: Validate super block (duration: 0.047 ms, status: 0)
00:24:53.704 [FTL][ftl0] Action: Initialize memory pools (duration: 13.631 ms, status: 0)
00:24:53.704 [FTL][ftl0] Action: Initialize bands (duration: 0.082 ms, status: 0)
00:24:53.704 [FTL][ftl0] Action: Register IO device (duration: 0.012 ms, status: 0)
00:24:53.704 [2024-11-20 15:17:54.325649] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:53.704 [FTL][ftl0] Action: Initialize core IO channel (duration: 5.942 ms, status: 0)
00:24:53.704 [FTL][ftl0] Action: Decorate bands (duration: 0.017 ms, status: 0)
00:24:53.704 [2024-11-20 15:17:54.331787] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:24:53.704 [FTL][ftl0] layout blob load: nvc 0x150 bytes, base 0x48 bytes, layout 0x190 bytes
00:24:53.704 [FTL][ftl0] layout blob store: nvc 0x150 bytes, base 0x48 bytes, layout 0x190 bytes
00:24:53.704 [FTL][ftl0] Base device capacity: 103424.00 MiB
00:24:53.704 [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:24:53.704 [FTL][ftl0] L2P entries: 20971520
00:24:53.704 [FTL][ftl0] L2P address size: 4
00:24:53.704 [FTL][ftl0] P2L checkpoint pages: 2048
00:24:53.704 [FTL][ftl0] NV cache chunk count 5
00:24:53.704 [FTL][ftl0] Action: Initialize layout (duration: 0.324 ms, status: 0)
00:24:53.704 [FTL][ftl0] Action: Verify layout (duration: 0.055 ms, status: 0)
00:24:53.705 [FTL][ftl0] NV cache layout:
    Region             offset (MiB)    blocks (MiB)
    sb                     0.00            0.12
    l2p                    0.12           80.00
    band_md               80.12            0.50
    band_md_mirror        80.62            0.50
    nvc_md               113.88            0.12
    nvc_md_mirror        114.00            0.12
    p2l0                  81.12            8.00
    p2l1                  89.12            8.00
    p2l2                  97.12            8.00
    p2l3                 105.12            8.00
    trim_md              113.12            0.25
    trim_md_mirror       113.38            0.25
    trim_log             113.62            0.12
    trim_log_mirror      113.75            0.12
00:24:53.705 [FTL][ftl0] Base device layout:
    Region             offset (MiB)    blocks (MiB)
    sb_mirror              0.00            0.12
    vmap              102400.25            3.38
    data_btm               0.25       102400.00
00:24:53.705 [FTL][ftl0] SB metadata layout - nvc:
    type          ver   blk_offs     blk_sz
    0x0             5   0x0          0x20
    0x2             0   0x20         0x5000
    0x3             2   0x5020       0x80
    0x4             2   0x50a0       0x80
    0xa             2   0x5120       0x800
    0xb             2   0x5920       0x800
    0xc             2   0x6120       0x800
    0xd             2   0x6920       0x800
    0xe             0   0x7120       0x40
    0xf             0   0x7160       0x40
    0x10            1   0x71a0       0x20
    0x11            1   0x71c0       0x20
    0x6             2   0x71e0       0x20
    0x7             2   0x7200       0x20
    0xfffffffe      0   0x7220       0x13c0e0
00:24:53.705 [FTL][ftl0] SB metadata layout - base dev:
    type          ver   blk_offs     blk_sz
    0x1             5   0x0          0x20
    0xfffffffe      0   0x20         0x20
    0x9             0   0x40         0x1900000
    0x5             0   0x1900040    0x360
    0xfffffffe      0   0x19003a0    0x3fc60
00:24:53.705 [FTL][ftl0] Action: Layout upgrade (duration: 0.856 ms, status: 0)
00:24:53.705 [FTL][ftl0] Action: Initialize metadata (duration: 49.514 ms, status: 0)
00:24:53.705 [FTL][ftl0] Action: Initialize band addresses (duration: 0.065 ms, status: 0)
00:24:53.705 [FTL][ftl0] Action: Initialize NV cache (duration: 66.611 ms, status: 0)
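The blk_sz values in the superblock dump are FTL block counts, so multiplying by the 4 KiB block reproduces the MiB figures in the layout dump, and the l2p region independently matches L2P entries times the 4-byte L2P address size. A sketch with values copied from the log above, assuming the 4096-byte block size the data_btm figures imply:

```c
#include <stdio.h>

int main(void)
{
    const double block = 4096.0;            /* FTL block size in bytes (inferred) */
    const double mib   = 1024.0 * 1024.0;

    /* l2p region: blk_sz 0x5000 in the SB dump, "blocks: 80.00 MiB" above */
    printf("l2p region: %.2f MiB\n", 0x5000 * block / mib);    /* 80.00 */

    /* the same 80 MiB from the L2P parameters: 20971520 entries x 4 B */
    printf("l2p table:  %.2f MiB\n", 20971520.0 * 4.0 / mib);  /* 80.00 */

    /* data_btm: blk_offs 0x40, blk_sz 0x1900000 -> the base data area */
    printf("data_btm:   %.2f MiB\n", 0x1900000 * block / mib); /* 102400.00 */
    return 0;
}
```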
00:24:53.705 [FTL][ftl0] Action: Initialize valid map (duration: 0.004 ms, status: 0)
00:24:53.705 [FTL][ftl0] Action: Initialize trim map (duration: 0.748 ms, status: 0)
00:24:53.705 [FTL][ftl0] Action: Initialize bands metadata (duration: 0.124 ms, status: 0)
00:24:53.706 [FTL][ftl0] Action: Initialize reloc (duration: 23.635 ms, status: 0)
00:24:53.706 [2024-11-20 15:17:54.495093] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:24:53.706 [2024-11-20 15:17:54.495139] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:24:53.706 [FTL][ftl0] Action: Restore NV cache metadata (duration: 20.351 ms, status: 0)
00:24:53.706 [FTL][ftl0] Action: Restore valid map metadata (duration: 30.940 ms, status: 0)
00:24:53.964 [FTL][ftl0] Action: Restore band info metadata (duration: 18.320 ms, status: 0)
00:24:53.965 [FTL][ftl0] Action: Restore trim metadata (duration: 18.501 ms, status: 0)
00:24:53.965 [FTL][ftl0] Action: Initialize P2L checkpointing (duration: 0.779 ms, status: 0)
00:24:53.965 [FTL][ftl0] Action: Restore P2L checkpoints (duration: 97.595 ms, status: 0)
00:24:53.965 [2024-11-20 15:17:54.673440] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:24:53.965 [FTL][ftl0] Action: Initialize L2P (duration: 16.062 ms, status: 0)
00:24:53.965 [FTL][ftl0] Action: Restore L2P (duration: 0.008 ms, status: 0)
00:24:53.965 [FTL][ftl0] Action: Finalize band initialization (duration: 0.046 ms, status: 0)
00:24:53.965 [FTL][ftl0] Action: Start core poller (duration: 0.007 ms, status: 0)
00:24:53.965 [2024-11-20 15:17:54.678396] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:24:53.965 [FTL][ftl0] Action: Self test on startup (duration: 0.015 ms, status: 0)
00:24:53.965 [FTL][ftl0] Action: Set FTL dirty state (duration: 38.480 ms, status: 0)
00:24:53.965 [FTL][ftl0] Action: Finalize initialization (duration: 0.043 ms, status: 0)
00:24:53.965 [2024-11-20 15:17:54.718572] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 433.565 ms, result 0
00:24:55.345 [2024-11-20T15:17:57.128Z] Copying: 27/1024 [MB] (27 MBps)
[... one progress line roughly per second, 27-33 MBps ...]
[2024-11-20T15:18:30.415Z] Copying: 1012/1024 [MB] (27 MBps)
[2024-11-20T15:18:31.800Z] Copying: 1024/1024 [MB] (average 28 MBps)
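The copy moves 1024 MB between roughly 15:17:55 and 15:18:31, so the 28 MBps average in the last progress line follows directly. A check, with the elapsed time approximated from the progress timestamps:

```c
#include <stdio.h>

int main(void)
{
    const double mb      = 1024.0; /* total copied, MB (from the progress lines) */
    const double seconds = 36.0;   /* ~15:17:55.7 to 15:18:31.8, approximate */
    printf("average: %.0f MBps\n", mb / seconds); /* ~28 MBps, as logged */
    return 0;
}
```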
00:25:30.964 [FTL][ftl0] Action: Deinit core IO channel (duration: 0.009 ms, status: 0)
00:25:30.964 [2024-11-20 15:18:31.524196] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:30.964 [FTL][ftl0] Action: Unregister IO device (duration: 4.563 ms, status: 0)
00:25:30.964 [FTL][ftl0] Action: Stop core poller (duration: 0.185 ms, status: 0)
00:25:30.964 [FTL][ftl0] Action: Persist L2P (duration: 2.854 ms, status: 0)
00:25:30.964 [FTL][ftl0] Action: Finish L2P trims (duration: 5.187 ms, status: 0)
00:25:30.964 [FTL][ftl0] Action: Persist NV cache metadata (duration: 42.408 ms, status: 0)
00:25:30.964 [FTL][ftl0] Action: Persist valid map metadata (duration: 23.599 ms, status: 0)
00:25:30.964 [FTL][ftl0] Action: Persist P2L metadata (duration: 0.105 ms, status: 0)
00:25:30.964 [FTL][ftl0] Action: Persist band info metadata (duration: 41.752 ms, status: 0)
00:25:30.964 [FTL][ftl0] Action: Persist trim metadata (duration: 40.511 ms, status: 0)
00:25:30.964 [FTL][ftl0] Action: Persist superblock (duration: 39.586 ms, status: 0)
00:25:30.964 [FTL][ftl0] Action: Set FTL clean state (duration: 40.229 ms, status: 0)
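Every management step in this log is traced with the same four fields (Action or Rollback, name, duration, status) emitted by mngt/ftl_mngt.c. A minimal sketch of that timing-and-logging pattern; the names and structure here are illustrative, not SPDK's internal API:

```c
#include <stdio.h>
#include <time.h>

/* Time one management step and print trace lines shaped like the
 * trace_step output in this log. Illustrative only. */
static int run_step(const char *name, int (*fn)(void))
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    int status = fn();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ms = (t1.tv_sec - t0.tv_sec) * 1e3
              + (t1.tv_nsec - t0.tv_nsec) / 1e6;

    printf("[FTL][ftl0] Action\n");
    printf("[FTL][ftl0]     name:     %s\n", name);
    printf("[FTL][ftl0]     duration: %.3f ms\n", ms);
    printf("[FTL][ftl0]     status:   %d\n", status);
    return status;
}

static int persist_superblock(void) { return 0; } /* stand-in for a real step */

int main(void)
{
    return run_step("Persist superblock", persist_superblock);
}
```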
wr_cnt: 0 state: free 00:25:30.964 [2024-11-20 15:18:31.766784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:30.964 [2024-11-20 15:18:31.766796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:30.964 [2024-11-20 15:18:31.766808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:30.964 [2024-11-20 15:18:31.766820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:30.964 [2024-11-20 15:18:31.766832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:30.964 [2024-11-20 15:18:31.766843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:30.964 [2024-11-20 15:18:31.766854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:30.964 [2024-11-20 15:18:31.766865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:30.964 [2024-11-20 15:18:31.766876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:30.964 [2024-11-20 15:18:31.766887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:30.964 [2024-11-20 15:18:31.766897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:30.964 [2024-11-20 15:18:31.766908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:30.964 [2024-11-20 15:18:31.766919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:30.964 [2024-11-20 15:18:31.766930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:30.964 [2024-11-20 15:18:31.766941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.766952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.766964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.766977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.766994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767360] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767649] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:30.965 [2024-11-20 15:18:31.767840] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:30.965 [2024-11-20 15:18:31.767858] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7d8770c9-24dd-42ab-a5d0-936f5b553fe3 00:25:30.965 [2024-11-20 15:18:31.767870] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:30.965 [2024-11-20 15:18:31.767881] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:30.965 [2024-11-20 15:18:31.767892] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:30.965 [2024-11-20 15:18:31.767903] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:30.965 [2024-11-20 15:18:31.767913] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:30.965 [2024-11-20 15:18:31.767924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:30.965 [2024-11-20 15:18:31.767952] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:30.965 [2024-11-20 15:18:31.767962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:30.965 [2024-11-20 15:18:31.767971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:30.965 [2024-11-20 15:18:31.767983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.965 [2024-11-20 15:18:31.767995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:30.965 [2024-11-20 15:18:31.768007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 
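A note on the statistics dump above: "WAF: inf" is the write-amplification factor, i.e. media writes divided by host writes. From the counters printed here (total writes: 960, user writes: 0) it follows directly from the definition:

\[
\mathrm{WAF} \;=\; \frac{\text{total writes}}{\text{user writes}} \;=\; \frac{960}{0} \;=\; \infty,
\]

which is why the log prints "inf": at this point the device has seen only FTL-internal metadata traffic and no user data, so the ratio is unbounded.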
00:25:30.965 [2024-11-20 15:18:31.768017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.965 [2024-11-20 15:18:31.789516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.965 [2024-11-20 15:18:31.789577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:30.965 [2024-11-20 15:18:31.789594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.471 ms 00:25:30.965 [2024-11-20 15:18:31.789614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.965 [2024-11-20 15:18:31.790280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.966 [2024-11-20 15:18:31.790306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:30.966 [2024-11-20 15:18:31.790320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:25:30.966 [2024-11-20 15:18:31.790341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.225 [2024-11-20 15:18:31.844344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.225 [2024-11-20 15:18:31.844412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:31.225 [2024-11-20 15:18:31.844428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.225 [2024-11-20 15:18:31.844439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.225 [2024-11-20 15:18:31.844519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.225 [2024-11-20 15:18:31.844531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:31.225 [2024-11-20 15:18:31.844542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.225 [2024-11-20 15:18:31.844559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.225 [2024-11-20 15:18:31.844686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.225 [2024-11-20 15:18:31.844705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:31.225 [2024-11-20 15:18:31.844718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.225 [2024-11-20 15:18:31.844730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.225 [2024-11-20 15:18:31.844762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.225 [2024-11-20 15:18:31.844774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:31.225 [2024-11-20 15:18:31.844786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.225 [2024-11-20 15:18:31.844797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.225 [2024-11-20 15:18:31.974841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.225 [2024-11-20 15:18:31.974923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:31.225 [2024-11-20 15:18:31.974941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.225 [2024-11-20 15:18:31.974952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.484 [2024-11-20 15:18:32.084645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.484 [2024-11-20 15:18:32.084737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:31.484 [2024-11-20 15:18:32.084755] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.484 [2024-11-20 15:18:32.084778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.484 [2024-11-20 15:18:32.084879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.484 [2024-11-20 15:18:32.084893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:31.484 [2024-11-20 15:18:32.084906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.484 [2024-11-20 15:18:32.084916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.484 [2024-11-20 15:18:32.084961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.484 [2024-11-20 15:18:32.084973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:31.484 [2024-11-20 15:18:32.084985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.484 [2024-11-20 15:18:32.084997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.484 [2024-11-20 15:18:32.085120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.484 [2024-11-20 15:18:32.085138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:31.484 [2024-11-20 15:18:32.085150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.484 [2024-11-20 15:18:32.085161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.484 [2024-11-20 15:18:32.085200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.484 [2024-11-20 15:18:32.085213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:31.484 [2024-11-20 15:18:32.085224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.484 [2024-11-20 15:18:32.085234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.484 [2024-11-20 15:18:32.085280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.484 [2024-11-20 15:18:32.085293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:31.484 [2024-11-20 15:18:32.085304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.484 [2024-11-20 15:18:32.085315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.484 [2024-11-20 15:18:32.085359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.484 [2024-11-20 15:18:32.085372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:31.484 [2024-11-20 15:18:32.085384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.484 [2024-11-20 15:18:32.085395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.484 [2024-11-20 15:18:32.085522] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 562.521 ms, result 0 00:25:32.420 00:25:32.420 00:25:32.420 15:18:33 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:34.323 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:34.323 15:18:35 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:25:34.323 
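The two ftl_restore steps above are the heart of the restore test: the command at restore.sh line 76 checks the file read back from ftl0 against a stored checksum (md5sum -c reports OK), and line 79 then re-runs spdk_dd to write testfile into ftl0 at an output offset of --seek=131072 blocks. A minimal sketch of the implied placement, assuming --seek counts 4096-byte logical blocks of the output bdev (only the 131072 value appears in the log; the block size is an assumption):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t seek_blocks = 131072; /* from the spdk_dd invocation above */
    const uint64_t block_size  = 4096;   /* assumed logical block size of ftl0 */

    const uint64_t offset = seek_blocks * block_size;
    printf("write begins %llu bytes (%llu MiB) into ftl0\n",
           (unsigned long long)offset,
           (unsigned long long)(offset >> 20));
    return 0;
}

Under that assumption the second write lands 512 MiB into the device.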
[2024-11-20 15:18:35.138965] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:25:34.323 [2024-11-20 15:18:35.139132] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80413 ] 00:25:34.581 [2024-11-20 15:18:35.325842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.840 [2024-11-20 15:18:35.472526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.124 [2024-11-20 15:18:35.906147] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:35.124 [2024-11-20 15:18:35.906239] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:35.385 [2024-11-20 15:18:36.073880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.385 [2024-11-20 15:18:36.073941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:35.385 [2024-11-20 15:18:36.073966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:35.385 [2024-11-20 15:18:36.073978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.385 [2024-11-20 15:18:36.074046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.385 [2024-11-20 15:18:36.074059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.385 [2024-11-20 15:18:36.074075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:35.385 [2024-11-20 15:18:36.074086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.385 [2024-11-20 15:18:36.074121] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:35.385 [2024-11-20 15:18:36.075170] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:35.385 [2024-11-20 15:18:36.075204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.385 [2024-11-20 15:18:36.075217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.385 [2024-11-20 15:18:36.075229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.090 ms 00:25:35.385 [2024-11-20 15:18:36.075240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.385 [2024-11-20 15:18:36.077741] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:35.385 [2024-11-20 15:18:36.099033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.385 [2024-11-20 15:18:36.099100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:35.385 [2024-11-20 15:18:36.099119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.323 ms 00:25:35.385 [2024-11-20 15:18:36.099131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.385 [2024-11-20 15:18:36.099246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.385 [2024-11-20 15:18:36.099262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:35.385 [2024-11-20 15:18:36.099275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:35.385 [2024-11-20 15:18:36.099285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
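Every management step in this startup (and in the shutdown traced earlier) is reported through the same four-field record emitted from mngt/ftl_mngt.c: Action, name, duration, status. A simplified, self-contained sketch of that reporting pattern, not SPDK's actual implementation (the step names are taken from the log; the step bodies here are empty placeholders):

#include <stdio.h>
#include <time.h>

typedef int (*step_fn)(void);

struct step {
    const char *name;
    step_fn fn;
};

/* elapsed milliseconds since t0 */
static double ms_since(const struct timespec *t0)
{
    struct timespec t1;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0->tv_sec) * 1e3 + (t1.tv_nsec - t0->tv_nsec) / 1e6;
}

/* placeholder step bodies; the real steps do the work logged above */
static int open_base_bdev(void)   { return 0; }
static int open_cache_bdev(void)  { return 0; }
static int load_super_block(void) { return 0; }

int main(void)
{
    struct step steps[] = {
        { "Open base bdev",   open_base_bdev },
        { "Open cache bdev",  open_cache_bdev },
        { "Load super block", load_super_block },
    };
    size_t n = sizeof(steps) / sizeof(steps[0]);

    for (size_t i = 0; i < n; i++) {
        struct timespec t0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int status = steps[i].fn();
        printf("[FTL][ftl0] Action\n");
        printf("[FTL][ftl0]   name:     %s\n", steps[i].name);
        printf("[FTL][ftl0]   duration: %.3f ms\n", ms_since(&t0));
        printf("[FTL][ftl0]   status:   %d\n", status);
        if (status != 0) {
            /* undo completed steps newest-first, mirroring the
             * Rollback records seen in the shutdown trace above */
            for (size_t j = i; j-- > 0; )
                printf("[FTL][ftl0] Rollback: %s\n", steps[j].name);
            return status;
        }
    }
    return 0;
}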
00:25:35.385 [2024-11-20 15:18:36.113057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.385 [2024-11-20 15:18:36.113123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.385 [2024-11-20 15:18:36.113143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.680 ms 00:25:35.385 [2024-11-20 15:18:36.113163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.385 [2024-11-20 15:18:36.113293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.385 [2024-11-20 15:18:36.113310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.385 [2024-11-20 15:18:36.113322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:25:35.385 [2024-11-20 15:18:36.113333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.385 [2024-11-20 15:18:36.113439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.385 [2024-11-20 15:18:36.113453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:35.385 [2024-11-20 15:18:36.113464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:35.385 [2024-11-20 15:18:36.113474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.385 [2024-11-20 15:18:36.113511] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:35.385 [2024-11-20 15:18:36.119671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.385 [2024-11-20 15:18:36.119736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.385 [2024-11-20 15:18:36.119753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.182 ms 00:25:35.385 [2024-11-20 15:18:36.119769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.385 [2024-11-20 15:18:36.119822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.385 [2024-11-20 15:18:36.119834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:35.385 [2024-11-20 15:18:36.119846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:35.385 [2024-11-20 15:18:36.119857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.385 [2024-11-20 15:18:36.119918] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:35.385 [2024-11-20 15:18:36.119946] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:35.385 [2024-11-20 15:18:36.119988] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:35.385 [2024-11-20 15:18:36.120013] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:35.385 [2024-11-20 15:18:36.120114] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:35.385 [2024-11-20 15:18:36.120128] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:35.385 [2024-11-20 15:18:36.120143] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:35.385 [2024-11-20 15:18:36.120157] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:35.385 [2024-11-20 15:18:36.120170] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:35.385 [2024-11-20 15:18:36.120182] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:35.385 [2024-11-20 15:18:36.120193] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:35.385 [2024-11-20 15:18:36.120204] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:35.385 [2024-11-20 15:18:36.120219] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:35.385 [2024-11-20 15:18:36.120230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.385 [2024-11-20 15:18:36.120241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:35.385 [2024-11-20 15:18:36.120252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:25:35.385 [2024-11-20 15:18:36.120263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.385 [2024-11-20 15:18:36.120344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.385 [2024-11-20 15:18:36.120357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:35.385 [2024-11-20 15:18:36.120368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:35.385 [2024-11-20 15:18:36.120379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.385 [2024-11-20 15:18:36.120488] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:35.385 [2024-11-20 15:18:36.120512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:35.385 [2024-11-20 15:18:36.120524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.385 [2024-11-20 15:18:36.120535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.385 [2024-11-20 15:18:36.120546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:35.386 [2024-11-20 15:18:36.120557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:35.386 [2024-11-20 15:18:36.120567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:35.386 [2024-11-20 15:18:36.120576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:35.386 [2024-11-20 15:18:36.120586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:35.386 [2024-11-20 15:18:36.120596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.386 [2024-11-20 15:18:36.120607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:35.386 [2024-11-20 15:18:36.120616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:35.386 [2024-11-20 15:18:36.120626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.386 [2024-11-20 15:18:36.120635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:35.386 [2024-11-20 15:18:36.120646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:35.386 [2024-11-20 15:18:36.120668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.386 [2024-11-20 15:18:36.120678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:35.386 [2024-11-20 15:18:36.120687] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:35.386 [2024-11-20 15:18:36.120697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.386 [2024-11-20 15:18:36.120707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:35.386 [2024-11-20 15:18:36.120728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:35.386 [2024-11-20 15:18:36.120738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.386 [2024-11-20 15:18:36.120748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:35.386 [2024-11-20 15:18:36.120758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:35.386 [2024-11-20 15:18:36.120767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.386 [2024-11-20 15:18:36.120777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:35.386 [2024-11-20 15:18:36.120786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:35.386 [2024-11-20 15:18:36.120796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.386 [2024-11-20 15:18:36.120806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:35.386 [2024-11-20 15:18:36.120815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:35.386 [2024-11-20 15:18:36.120824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.386 [2024-11-20 15:18:36.120834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:35.386 [2024-11-20 15:18:36.120844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:35.386 [2024-11-20 15:18:36.120852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.386 [2024-11-20 15:18:36.120861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:35.386 [2024-11-20 15:18:36.120871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:35.386 [2024-11-20 15:18:36.120880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.386 [2024-11-20 15:18:36.120889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:35.386 [2024-11-20 15:18:36.120899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:35.386 [2024-11-20 15:18:36.120909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.386 [2024-11-20 15:18:36.120919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:35.386 [2024-11-20 15:18:36.120929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:35.386 [2024-11-20 15:18:36.120938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.386 [2024-11-20 15:18:36.120948] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:35.386 [2024-11-20 15:18:36.120958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:35.386 [2024-11-20 15:18:36.120969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.386 [2024-11-20 15:18:36.120979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.386 [2024-11-20 15:18:36.120989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:35.386 [2024-11-20 15:18:36.120999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:35.386 [2024-11-20 
15:18:36.121008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:35.386 [2024-11-20 15:18:36.121017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:35.386 [2024-11-20 15:18:36.121026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:35.386 [2024-11-20 15:18:36.121035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:35.386 [2024-11-20 15:18:36.121046] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:35.386 [2024-11-20 15:18:36.121059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.386 [2024-11-20 15:18:36.121071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:35.386 [2024-11-20 15:18:36.121082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:35.386 [2024-11-20 15:18:36.121093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:35.386 [2024-11-20 15:18:36.121103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:35.386 [2024-11-20 15:18:36.121114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:35.386 [2024-11-20 15:18:36.121125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:35.386 [2024-11-20 15:18:36.121136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:35.386 [2024-11-20 15:18:36.121147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:35.386 [2024-11-20 15:18:36.121158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:35.386 [2024-11-20 15:18:36.121168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:35.386 [2024-11-20 15:18:36.121178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:35.386 [2024-11-20 15:18:36.121190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:35.386 [2024-11-20 15:18:36.121200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:35.386 [2024-11-20 15:18:36.121211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:35.386 [2024-11-20 15:18:36.121221] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:35.386 [2024-11-20 15:18:36.121238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.386 [2024-11-20 15:18:36.121250] 
upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:35.386 [2024-11-20 15:18:36.121261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:35.386 [2024-11-20 15:18:36.121272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:35.386 [2024-11-20 15:18:36.121283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:35.386 [2024-11-20 15:18:36.121294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.386 [2024-11-20 15:18:36.121306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:35.386 [2024-11-20 15:18:36.121317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.863 ms 00:25:35.386 [2024-11-20 15:18:36.121327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.386 [2024-11-20 15:18:36.171312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.386 [2024-11-20 15:18:36.171378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.386 [2024-11-20 15:18:36.171397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.002 ms 00:25:35.386 [2024-11-20 15:18:36.171409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.386 [2024-11-20 15:18:36.171541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.386 [2024-11-20 15:18:36.171553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:35.386 [2024-11-20 15:18:36.171566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:35.386 [2024-11-20 15:18:36.171576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.645 [2024-11-20 15:18:36.233886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.645 [2024-11-20 15:18:36.233960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.645 [2024-11-20 15:18:36.233979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.278 ms 00:25:35.645 [2024-11-20 15:18:36.233992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.645 [2024-11-20 15:18:36.234077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.645 [2024-11-20 15:18:36.234090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.645 [2024-11-20 15:18:36.234109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:35.645 [2024-11-20 15:18:36.234120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.646 [2024-11-20 15:18:36.234954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.646 [2024-11-20 15:18:36.234971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.646 [2024-11-20 15:18:36.234984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:25:35.646 [2024-11-20 15:18:36.234995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.646 [2024-11-20 15:18:36.235141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.646 [2024-11-20 15:18:36.235156] mngt/ftl_mngt.c: 
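The layout numbers in the dump above cross-check cleanly: the superblock reports 20971520 L2P entries with a 4-byte address size, which accounts exactly for the 80.00 MiB l2p region shown in the NV cache layout:

\[
20\,971\,520 \times 4\ \mathrm{B} \;=\; 83\,886\,080\ \mathrm{B} \;=\; 80\ \mathrm{MiB}.
\]

With an assumed 4 KiB FTL block (the block size itself is not printed here), those entries would address 80 GiB of user-visible space out of the 102400.00 MiB data_btm region, the remainder being spare capacity — an inference, not something this log states.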
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.646 [2024-11-20 15:18:36.235169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:25:35.646 [2024-11-20 15:18:36.235187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.646 [2024-11-20 15:18:36.257131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.646 [2024-11-20 15:18:36.257203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.646 [2024-11-20 15:18:36.257228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.951 ms 00:25:35.646 [2024-11-20 15:18:36.257239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.646 [2024-11-20 15:18:36.279469] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:35.646 [2024-11-20 15:18:36.279543] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:35.646 [2024-11-20 15:18:36.279564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.646 [2024-11-20 15:18:36.279577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:35.646 [2024-11-20 15:18:36.279593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.170 ms 00:25:35.646 [2024-11-20 15:18:36.279604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.646 [2024-11-20 15:18:36.312982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.646 [2024-11-20 15:18:36.313096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:35.646 [2024-11-20 15:18:36.313119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.338 ms 00:25:35.646 [2024-11-20 15:18:36.313131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.646 [2024-11-20 15:18:36.335463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.646 [2024-11-20 15:18:36.335559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:35.646 [2024-11-20 15:18:36.335579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.247 ms 00:25:35.646 [2024-11-20 15:18:36.335590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.646 [2024-11-20 15:18:36.357768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.646 [2024-11-20 15:18:36.357866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:35.646 [2024-11-20 15:18:36.357886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.102 ms 00:25:35.646 [2024-11-20 15:18:36.357897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.646 [2024-11-20 15:18:36.358919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.646 [2024-11-20 15:18:36.358958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:35.646 [2024-11-20 15:18:36.358972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:25:35.646 [2024-11-20 15:18:36.358989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.646 [2024-11-20 15:18:36.463560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.646 [2024-11-20 15:18:36.463659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:25:35.646 [2024-11-20 15:18:36.463687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.705 ms 00:25:35.646 [2024-11-20 15:18:36.463698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.904 [2024-11-20 15:18:36.479814] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:35.904 [2024-11-20 15:18:36.485058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.904 [2024-11-20 15:18:36.485122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:35.904 [2024-11-20 15:18:36.485143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.267 ms 00:25:35.904 [2024-11-20 15:18:36.485155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.904 [2024-11-20 15:18:36.485328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.904 [2024-11-20 15:18:36.485344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:35.904 [2024-11-20 15:18:36.485357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:35.904 [2024-11-20 15:18:36.485373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.904 [2024-11-20 15:18:36.485501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.904 [2024-11-20 15:18:36.485515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:35.904 [2024-11-20 15:18:36.485527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:35.904 [2024-11-20 15:18:36.485538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.904 [2024-11-20 15:18:36.485566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.904 [2024-11-20 15:18:36.485579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:35.904 [2024-11-20 15:18:36.485590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:35.904 [2024-11-20 15:18:36.485609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.904 [2024-11-20 15:18:36.485658] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:35.904 [2024-11-20 15:18:36.485672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.904 [2024-11-20 15:18:36.485683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:35.904 [2024-11-20 15:18:36.485695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:35.904 [2024-11-20 15:18:36.485706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.904 [2024-11-20 15:18:36.527685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.904 [2024-11-20 15:18:36.527788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:35.904 [2024-11-20 15:18:36.527810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.006 ms 00:25:35.904 [2024-11-20 15:18:36.527835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.904 [2024-11-20 15:18:36.527998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.904 [2024-11-20 15:18:36.528013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:35.904 [2024-11-20 15:18:36.528025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 
00:25:35.904 [2024-11-20 15:18:36.528035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.904 [2024-11-20 15:18:36.529654] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 455.938 ms, result 0 00:25:36.839  [2024-11-20T15:18:38.612Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-20T15:18:39.589Z] Copying: 52/1024 [MB] (26 MBps) [2024-11-20T15:18:40.961Z] Copying: 79/1024 [MB] (26 MBps) [2024-11-20T15:18:41.897Z] Copying: 105/1024 [MB] (26 MBps) [2024-11-20T15:18:42.833Z] Copying: 132/1024 [MB] (27 MBps) [2024-11-20T15:18:43.770Z] Copying: 160/1024 [MB] (27 MBps) [2024-11-20T15:18:44.706Z] Copying: 186/1024 [MB] (26 MBps) [2024-11-20T15:18:45.641Z] Copying: 214/1024 [MB] (27 MBps) [2024-11-20T15:18:46.576Z] Copying: 242/1024 [MB] (28 MBps) [2024-11-20T15:18:47.952Z] Copying: 270/1024 [MB] (27 MBps) [2024-11-20T15:18:48.887Z] Copying: 298/1024 [MB] (28 MBps) [2024-11-20T15:18:49.822Z] Copying: 326/1024 [MB] (27 MBps) [2024-11-20T15:18:50.758Z] Copying: 353/1024 [MB] (27 MBps) [2024-11-20T15:18:51.694Z] Copying: 379/1024 [MB] (26 MBps) [2024-11-20T15:18:52.630Z] Copying: 404/1024 [MB] (25 MBps) [2024-11-20T15:18:53.566Z] Copying: 432/1024 [MB] (27 MBps) [2024-11-20T15:18:54.945Z] Copying: 459/1024 [MB] (27 MBps) [2024-11-20T15:18:55.517Z] Copying: 484/1024 [MB] (25 MBps) [2024-11-20T15:18:56.895Z] Copying: 508/1024 [MB] (24 MBps) [2024-11-20T15:18:57.831Z] Copying: 533/1024 [MB] (25 MBps) [2024-11-20T15:18:58.768Z] Copying: 560/1024 [MB] (26 MBps) [2024-11-20T15:18:59.720Z] Copying: 588/1024 [MB] (27 MBps) [2024-11-20T15:19:00.657Z] Copying: 617/1024 [MB] (29 MBps) [2024-11-20T15:19:01.593Z] Copying: 644/1024 [MB] (27 MBps) [2024-11-20T15:19:02.529Z] Copying: 673/1024 [MB] (28 MBps) [2024-11-20T15:19:03.906Z] Copying: 702/1024 [MB] (28 MBps) [2024-11-20T15:19:04.848Z] Copying: 730/1024 [MB] (28 MBps) [2024-11-20T15:19:05.786Z] Copying: 759/1024 [MB] (28 MBps) [2024-11-20T15:19:06.723Z] Copying: 788/1024 [MB] (28 MBps) [2024-11-20T15:19:07.659Z] Copying: 816/1024 [MB] (28 MBps) [2024-11-20T15:19:08.651Z] Copying: 844/1024 [MB] (28 MBps) [2024-11-20T15:19:09.586Z] Copying: 871/1024 [MB] (27 MBps) [2024-11-20T15:19:10.521Z] Copying: 898/1024 [MB] (27 MBps) [2024-11-20T15:19:11.898Z] Copying: 927/1024 [MB] (28 MBps) [2024-11-20T15:19:12.834Z] Copying: 953/1024 [MB] (26 MBps) [2024-11-20T15:19:13.811Z] Copying: 980/1024 [MB] (26 MBps) [2024-11-20T15:19:14.747Z] Copying: 1007/1024 [MB] (26 MBps) [2024-11-20T15:19:15.005Z] Copying: 1023/1024 [MB] (15 MBps) [2024-11-20T15:19:15.005Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-20 15:19:14.938353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.169 [2024-11-20 15:19:14.938554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:14.169 [2024-11-20 15:19:14.938653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:14.169 [2024-11-20 15:19:14.938705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.169 [2024-11-20 15:19:14.940813] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:14.169 [2024-11-20 15:19:14.947185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.169 [2024-11-20 15:19:14.947326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:14.169 [2024-11-20 15:19:14.947451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
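The copy trace above is self-consistent: spdk_dd moves 1024 MB at a reported average of 26 MBps, i.e.

\[
\frac{1024\ \mathrm{MB}}{26\ \mathrm{MB/s}} \;\approx\; 39\ \mathrm{s},
\]

which agrees with the wall-clock span of the progress timestamps, running from 15:18:38Z to 15:19:15Z.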
[FTL][ftl0] duration: 6.158 ms 00:26:14.169 [2024-11-20 15:19:14.947491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.169 [2024-11-20 15:19:14.959290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.169 [2024-11-20 15:19:14.959380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:14.169 [2024-11-20 15:19:14.959424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.921 ms 00:26:14.169 [2024-11-20 15:19:14.959466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.169 [2024-11-20 15:19:14.984493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.169 [2024-11-20 15:19:14.984674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:14.169 [2024-11-20 15:19:14.984778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.024 ms 00:26:14.169 [2024-11-20 15:19:14.984820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.170 [2024-11-20 15:19:14.990411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.170 [2024-11-20 15:19:14.990556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:14.170 [2024-11-20 15:19:14.990578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.535 ms 00:26:14.170 [2024-11-20 15:19:14.990590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.429 [2024-11-20 15:19:15.031272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.429 [2024-11-20 15:19:15.031337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:14.429 [2024-11-20 15:19:15.031370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.681 ms 00:26:14.429 [2024-11-20 15:19:15.031395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.429 [2024-11-20 15:19:15.054188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.429 [2024-11-20 15:19:15.054249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:14.429 [2024-11-20 15:19:15.054267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.780 ms 00:26:14.429 [2024-11-20 15:19:15.054279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.429 [2024-11-20 15:19:15.160617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.429 [2024-11-20 15:19:15.160742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:14.429 [2024-11-20 15:19:15.160781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.432 ms 00:26:14.429 [2024-11-20 15:19:15.160795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.429 [2024-11-20 15:19:15.203139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.429 [2024-11-20 15:19:15.203216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:14.429 [2024-11-20 15:19:15.203238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.386 ms 00:26:14.429 [2024-11-20 15:19:15.203249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.429 [2024-11-20 15:19:15.243538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.429 [2024-11-20 15:19:15.243631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:14.429 [2024-11-20 
15:19:15.243651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.300 ms 00:26:14.429 [2024-11-20 15:19:15.243662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.689 [2024-11-20 15:19:15.281194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.689 [2024-11-20 15:19:15.281265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:14.689 [2024-11-20 15:19:15.281284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.533 ms 00:26:14.689 [2024-11-20 15:19:15.281295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.689 [2024-11-20 15:19:15.319974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.689 [2024-11-20 15:19:15.320045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:14.689 [2024-11-20 15:19:15.320063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.623 ms 00:26:14.689 [2024-11-20 15:19:15.320075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.689 [2024-11-20 15:19:15.320156] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:14.689 [2024-11-20 15:19:15.320183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118528 / 261120 wr_cnt: 1 state: open 00:26:14.689 [2024-11-20 15:19:15.320199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320371] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 
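For scale, the band counters in this final validity dump convert to familiar units under the same assumed 4 KiB block as above (the dump itself reports raw block counts only):

\[
261\,120 \times 4\ \mathrm{KiB} \;=\; 1020\ \mathrm{MiB}\ \text{per band},
\qquad
\frac{118\,528}{261\,120} \;\approx\; 45\%.
\]

So roughly 45% of Band 1 holds valid data at shutdown, while every other band listed is untouched (wr_cnt: 0, state: free).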
[2024-11-20 15:19:15.320653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 
state: free 00:26:14.689 [2024-11-20 15:19:15.320948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.320994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.321005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.321016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.321027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.321038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.321049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.321060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:14.689 [2024-11-20 15:19:15.321071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 
0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:14.690 [2024-11-20 15:19:15.321350] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:14.690 [2024-11-20 15:19:15.321363] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7d8770c9-24dd-42ab-a5d0-936f5b553fe3 00:26:14.690 [2024-11-20 15:19:15.321375] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118528 00:26:14.690 [2024-11-20 15:19:15.321387] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 119488 00:26:14.690 [2024-11-20 15:19:15.321398] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118528 00:26:14.690 [2024-11-20 15:19:15.321410] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:26:14.690 [2024-11-20 15:19:15.321421] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:14.690 [2024-11-20 15:19:15.321442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:14.690 [2024-11-20 15:19:15.321467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:14.690 [2024-11-20 15:19:15.321477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:14.690 [2024-11-20 15:19:15.321487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:14.690 [2024-11-20 15:19:15.321498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.690 [2024-11-20 15:19:15.321510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:14.690 [2024-11-20 15:19:15.321522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.370 ms 00:26:14.690 [2024-11-20 15:19:15.321533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.690 [2024-11-20 15:19:15.343544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.690 [2024-11-20 15:19:15.343588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:14.690 [2024-11-20 15:19:15.343602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.997 ms 00:26:14.690 [2024-11-20 15:19:15.343619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.690 [2024-11-20 15:19:15.344212] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.690 [2024-11-20 15:19:15.344224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:14.690 [2024-11-20 15:19:15.344237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:26:14.690 [2024-11-20 15:19:15.344248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.690 [2024-11-20 15:19:15.401429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.690 [2024-11-20 15:19:15.401498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:14.690 [2024-11-20 15:19:15.401514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.690 [2024-11-20 15:19:15.401525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.690 [2024-11-20 15:19:15.401617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.690 [2024-11-20 15:19:15.401629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:14.690 [2024-11-20 15:19:15.401641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.690 [2024-11-20 15:19:15.401651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.690 [2024-11-20 15:19:15.401746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.690 [2024-11-20 15:19:15.401760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:14.690 [2024-11-20 15:19:15.401778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.690 [2024-11-20 15:19:15.401788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.690 [2024-11-20 15:19:15.401808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.690 [2024-11-20 15:19:15.401819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:14.690 [2024-11-20 15:19:15.401830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.690 [2024-11-20 15:19:15.401840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.950 [2024-11-20 15:19:15.542608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.950 [2024-11-20 15:19:15.542699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:14.950 [2024-11-20 15:19:15.542754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.950 [2024-11-20 15:19:15.542766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.950 [2024-11-20 15:19:15.655853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.950 [2024-11-20 15:19:15.655939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:14.950 [2024-11-20 15:19:15.655958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.950 [2024-11-20 15:19:15.655970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.950 [2024-11-20 15:19:15.656094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.950 [2024-11-20 15:19:15.656108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:14.950 [2024-11-20 15:19:15.656120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.950 [2024-11-20 15:19:15.656140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:14.950 [2024-11-20 15:19:15.656195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.950 [2024-11-20 15:19:15.656208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:14.950 [2024-11-20 15:19:15.656220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.950 [2024-11-20 15:19:15.656230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.950 [2024-11-20 15:19:15.656378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.950 [2024-11-20 15:19:15.656392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:14.950 [2024-11-20 15:19:15.656405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.950 [2024-11-20 15:19:15.656415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.950 [2024-11-20 15:19:15.656459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.950 [2024-11-20 15:19:15.656472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:14.950 [2024-11-20 15:19:15.656483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.950 [2024-11-20 15:19:15.656493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.950 [2024-11-20 15:19:15.656541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.950 [2024-11-20 15:19:15.656552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:14.950 [2024-11-20 15:19:15.656563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.950 [2024-11-20 15:19:15.656573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.950 [2024-11-20 15:19:15.656627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.950 [2024-11-20 15:19:15.656641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:14.950 [2024-11-20 15:19:15.656651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.950 [2024-11-20 15:19:15.656662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.950 [2024-11-20 15:19:15.656832] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 722.365 ms, result 0 00:26:16.854 00:26:16.854 00:26:16.854 15:19:17 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:26:16.854 [2024-11-20 15:19:17.464658] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
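The shutdown statistics dumped above show 119488 total writes against 118528 user writes, which is where the reported write amplification factor (WAF) of 1.0081 comes from; the spdk_dd restore command launched here then reads 262144 blocks starting at a 131072-block offset. A minimal Python cross-check of both figures follows; the 4096-byte logical block size is an assumption (it is not stated in this log), though it agrees with the 1024 MB total reported by the copy progress further down.

    # Cross-check of figures from the surrounding log output.
    # The 4096-byte logical block size is assumed, not logged here;
    # it matches the 1024 MB total the spdk_dd copy loop reports below.
    total_writes = 119488              # ftl_dev_dump_stats: total writes
    user_writes = 118528               # ftl_dev_dump_stats: user writes
    print(f"WAF: {total_writes / user_writes:.4f}")   # -> WAF: 1.0081

    count, skip, block = 262144, 131072, 4096         # spdk_dd --count/--skip, assumed block size
    print(f"read:   {count * block // 2**20} MiB")    # -> read:   1024 MiB
    print(f"offset: {skip * block // 2**20} MiB")     # -> offset: 512 MiB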
00:26:16.854 [2024-11-20 15:19:17.464834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80827 ] 00:26:16.854 [2024-11-20 15:19:17.654665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.161 [2024-11-20 15:19:17.806723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.419 [2024-11-20 15:19:18.245291] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:17.419 [2024-11-20 15:19:18.245384] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:17.679 [2024-11-20 15:19:18.413081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.679 [2024-11-20 15:19:18.413153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:17.679 [2024-11-20 15:19:18.413176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:17.679 [2024-11-20 15:19:18.413188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.679 [2024-11-20 15:19:18.413245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.679 [2024-11-20 15:19:18.413258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:17.679 [2024-11-20 15:19:18.413274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:17.679 [2024-11-20 15:19:18.413284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.679 [2024-11-20 15:19:18.413307] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:17.679 [2024-11-20 15:19:18.414361] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:17.679 [2024-11-20 15:19:18.414389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.679 [2024-11-20 15:19:18.414402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:17.679 [2024-11-20 15:19:18.414415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:26:17.679 [2024-11-20 15:19:18.414426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.679 [2024-11-20 15:19:18.416945] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:17.679 [2024-11-20 15:19:18.438746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.679 [2024-11-20 15:19:18.438837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:17.679 [2024-11-20 15:19:18.438858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.823 ms 00:26:17.679 [2024-11-20 15:19:18.438870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.679 [2024-11-20 15:19:18.438960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.679 [2024-11-20 15:19:18.438975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:17.679 [2024-11-20 15:19:18.438987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:17.679 [2024-11-20 15:19:18.438998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.679 [2024-11-20 15:19:18.451736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:17.679 [2024-11-20 15:19:18.451784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:17.679 [2024-11-20 15:19:18.451799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.673 ms 00:26:17.679 [2024-11-20 15:19:18.451815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.679 [2024-11-20 15:19:18.451911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.679 [2024-11-20 15:19:18.451925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:17.679 [2024-11-20 15:19:18.451936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:26:17.679 [2024-11-20 15:19:18.451947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.679 [2024-11-20 15:19:18.452017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.679 [2024-11-20 15:19:18.452046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:17.679 [2024-11-20 15:19:18.452058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:17.679 [2024-11-20 15:19:18.452070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.679 [2024-11-20 15:19:18.452105] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:17.679 [2024-11-20 15:19:18.458001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.679 [2024-11-20 15:19:18.458049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:17.679 [2024-11-20 15:19:18.458063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.919 ms 00:26:17.679 [2024-11-20 15:19:18.458080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.679 [2024-11-20 15:19:18.458118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.679 [2024-11-20 15:19:18.458130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:17.679 [2024-11-20 15:19:18.458142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:17.679 [2024-11-20 15:19:18.458153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.679 [2024-11-20 15:19:18.458197] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:17.679 [2024-11-20 15:19:18.458225] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:17.679 [2024-11-20 15:19:18.458266] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:17.679 [2024-11-20 15:19:18.458292] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:17.679 [2024-11-20 15:19:18.458395] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:17.679 [2024-11-20 15:19:18.458410] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:17.679 [2024-11-20 15:19:18.458425] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:17.679 [2024-11-20 15:19:18.458440] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:17.679 [2024-11-20 15:19:18.458454] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:17.679 [2024-11-20 15:19:18.458467] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:17.679 [2024-11-20 15:19:18.458478] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:17.679 [2024-11-20 15:19:18.458489] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:17.679 [2024-11-20 15:19:18.458504] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:17.679 [2024-11-20 15:19:18.458516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.679 [2024-11-20 15:19:18.458527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:17.679 [2024-11-20 15:19:18.458540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:26:17.679 [2024-11-20 15:19:18.458550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.679 [2024-11-20 15:19:18.458628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.679 [2024-11-20 15:19:18.458640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:17.679 [2024-11-20 15:19:18.458652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:26:17.679 [2024-11-20 15:19:18.458663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.679 [2024-11-20 15:19:18.458798] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:17.679 [2024-11-20 15:19:18.458814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:17.679 [2024-11-20 15:19:18.458825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:17.679 [2024-11-20 15:19:18.458853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.679 [2024-11-20 15:19:18.458865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:17.679 [2024-11-20 15:19:18.458876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:17.679 [2024-11-20 15:19:18.458887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:17.679 [2024-11-20 15:19:18.458898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:17.679 [2024-11-20 15:19:18.458909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:17.679 [2024-11-20 15:19:18.458931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:17.679 [2024-11-20 15:19:18.458942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:17.679 [2024-11-20 15:19:18.458952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:17.679 [2024-11-20 15:19:18.458962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:17.679 [2024-11-20 15:19:18.458972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:17.679 [2024-11-20 15:19:18.458998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:17.679 [2024-11-20 15:19:18.459019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.679 [2024-11-20 15:19:18.459041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:17.679 [2024-11-20 15:19:18.459050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:17.679 [2024-11-20 15:19:18.459060] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.679 [2024-11-20 15:19:18.459069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:17.679 [2024-11-20 15:19:18.459079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:17.679 [2024-11-20 15:19:18.459105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.679 [2024-11-20 15:19:18.459115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:17.679 [2024-11-20 15:19:18.459125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:17.679 [2024-11-20 15:19:18.459134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.679 [2024-11-20 15:19:18.459144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:17.679 [2024-11-20 15:19:18.459155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:17.679 [2024-11-20 15:19:18.459165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.680 [2024-11-20 15:19:18.459174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:17.680 [2024-11-20 15:19:18.459184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:17.680 [2024-11-20 15:19:18.459194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.680 [2024-11-20 15:19:18.459204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:17.680 [2024-11-20 15:19:18.459214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:17.680 [2024-11-20 15:19:18.459224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:17.680 [2024-11-20 15:19:18.459234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:17.680 [2024-11-20 15:19:18.459244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:17.680 [2024-11-20 15:19:18.459253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:17.680 [2024-11-20 15:19:18.459263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:17.680 [2024-11-20 15:19:18.459273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:17.680 [2024-11-20 15:19:18.459282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.680 [2024-11-20 15:19:18.459292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:17.680 [2024-11-20 15:19:18.459301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:17.680 [2024-11-20 15:19:18.459315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.680 [2024-11-20 15:19:18.459324] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:17.680 [2024-11-20 15:19:18.459335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:17.680 [2024-11-20 15:19:18.459347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:17.680 [2024-11-20 15:19:18.459357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.680 [2024-11-20 15:19:18.459369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:17.680 [2024-11-20 15:19:18.459379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:17.680 [2024-11-20 15:19:18.459389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:17.680 
[2024-11-20 15:19:18.459399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:17.680 [2024-11-20 15:19:18.459409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:17.680 [2024-11-20 15:19:18.459419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:17.680 [2024-11-20 15:19:18.459431] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:17.680 [2024-11-20 15:19:18.459445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:17.680 [2024-11-20 15:19:18.459470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:17.680 [2024-11-20 15:19:18.459481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:17.680 [2024-11-20 15:19:18.459491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:17.680 [2024-11-20 15:19:18.459501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:17.680 [2024-11-20 15:19:18.459512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:17.680 [2024-11-20 15:19:18.459522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:17.680 [2024-11-20 15:19:18.459532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:17.680 [2024-11-20 15:19:18.459542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:17.680 [2024-11-20 15:19:18.459553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:17.680 [2024-11-20 15:19:18.459563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:17.680 [2024-11-20 15:19:18.459573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:17.680 [2024-11-20 15:19:18.459584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:17.680 [2024-11-20 15:19:18.459594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:17.680 [2024-11-20 15:19:18.459605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:17.680 [2024-11-20 15:19:18.459615] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:17.680 [2024-11-20 15:19:18.459630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:17.680 [2024-11-20 15:19:18.459643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:17.680 [2024-11-20 15:19:18.459653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:17.680 [2024-11-20 15:19:18.459664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:17.680 [2024-11-20 15:19:18.459675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:17.680 [2024-11-20 15:19:18.459687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.680 [2024-11-20 15:19:18.459697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:17.680 [2024-11-20 15:19:18.459709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:26:17.680 [2024-11-20 15:19:18.459719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.680 [2024-11-20 15:19:18.510545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.680 [2024-11-20 15:19:18.510607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:17.680 [2024-11-20 15:19:18.510627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.851 ms 00:26:17.680 [2024-11-20 15:19:18.510640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.680 [2024-11-20 15:19:18.510803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.680 [2024-11-20 15:19:18.510817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:17.680 [2024-11-20 15:19:18.510829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:26:17.680 [2024-11-20 15:19:18.510840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.939 [2024-11-20 15:19:18.574974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.939 [2024-11-20 15:19:18.575040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:17.939 [2024-11-20 15:19:18.575058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.109 ms 00:26:17.939 [2024-11-20 15:19:18.575070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.939 [2024-11-20 15:19:18.575150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.939 [2024-11-20 15:19:18.575162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:17.940 [2024-11-20 15:19:18.575179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:17.940 [2024-11-20 15:19:18.575190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.940 [2024-11-20 15:19:18.576014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.940 [2024-11-20 15:19:18.576035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:17.940 [2024-11-20 15:19:18.576046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:26:17.940 [2024-11-20 15:19:18.576057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.940 [2024-11-20 15:19:18.576196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.940 [2024-11-20 15:19:18.576211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:17.940 [2024-11-20 15:19:18.576222] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:26:17.940 [2024-11-20 15:19:18.576239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.940 [2024-11-20 15:19:18.599804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.940 [2024-11-20 15:19:18.599856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:17.940 [2024-11-20 15:19:18.599877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.578 ms 00:26:17.940 [2024-11-20 15:19:18.599889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.940 [2024-11-20 15:19:18.621229] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:17.940 [2024-11-20 15:19:18.621279] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:17.940 [2024-11-20 15:19:18.621298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.940 [2024-11-20 15:19:18.621310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:17.940 [2024-11-20 15:19:18.621325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.275 ms 00:26:17.940 [2024-11-20 15:19:18.621335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.940 [2024-11-20 15:19:18.653736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.940 [2024-11-20 15:19:18.653792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:17.940 [2024-11-20 15:19:18.653810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.394 ms 00:26:17.940 [2024-11-20 15:19:18.653822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.940 [2024-11-20 15:19:18.674333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.940 [2024-11-20 15:19:18.674390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:17.940 [2024-11-20 15:19:18.674406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.475 ms 00:26:17.940 [2024-11-20 15:19:18.674417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.940 [2024-11-20 15:19:18.694217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.940 [2024-11-20 15:19:18.694265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:17.940 [2024-11-20 15:19:18.694283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.779 ms 00:26:17.940 [2024-11-20 15:19:18.694294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.940 [2024-11-20 15:19:18.695291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.940 [2024-11-20 15:19:18.695325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:17.940 [2024-11-20 15:19:18.695340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:26:17.940 [2024-11-20 15:19:18.695358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.199 [2024-11-20 15:19:18.804753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.199 [2024-11-20 15:19:18.804831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:18.199 [2024-11-20 15:19:18.804861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 109.538 ms 00:26:18.199 [2024-11-20 15:19:18.804873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.199 [2024-11-20 15:19:18.818921] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:18.199 [2024-11-20 15:19:18.824183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.199 [2024-11-20 15:19:18.824222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:18.199 [2024-11-20 15:19:18.824241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.243 ms 00:26:18.199 [2024-11-20 15:19:18.824253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.199 [2024-11-20 15:19:18.824407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.199 [2024-11-20 15:19:18.824422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:18.199 [2024-11-20 15:19:18.824435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:18.199 [2024-11-20 15:19:18.824451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.199 [2024-11-20 15:19:18.826826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.199 [2024-11-20 15:19:18.826877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:18.199 [2024-11-20 15:19:18.826891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.328 ms 00:26:18.199 [2024-11-20 15:19:18.826903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.199 [2024-11-20 15:19:18.826953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.199 [2024-11-20 15:19:18.826965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:18.199 [2024-11-20 15:19:18.826976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:18.199 [2024-11-20 15:19:18.826987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.199 [2024-11-20 15:19:18.827037] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:18.199 [2024-11-20 15:19:18.827050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.199 [2024-11-20 15:19:18.827062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:18.199 [2024-11-20 15:19:18.827073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:18.199 [2024-11-20 15:19:18.827083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.199 [2024-11-20 15:19:18.866975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.199 [2024-11-20 15:19:18.867021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:18.199 [2024-11-20 15:19:18.867038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.934 ms 00:26:18.199 [2024-11-20 15:19:18.867057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.199 [2024-11-20 15:19:18.867152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.199 [2024-11-20 15:19:18.867166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:18.199 [2024-11-20 15:19:18.867178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:26:18.199 [2024-11-20 15:19:18.867189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
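The startup trace above also lends itself to a quick arithmetic sanity check before the 'FTL startup' finish record below: 20971520 L2P entries at the reported 4-byte address size is exactly the 80.00 MiB shown for the l2p region in the NV cache layout, and at one entry per logical block the table spans an 80 GiB address space, again assuming the 4 KiB block size noted earlier. A short Python sketch:

    # L2P sizing check against the ftl_layout_setup output above.
    entries = 20971520                 # logged "L2P entries"
    addr_size = 4                      # logged "L2P address size" (bytes)
    print(entries * addr_size / 2**20, "MiB")  # -> 80.0, matches "Region l2p ... 80.00 MiB"

    block = 4096                       # assumed 4 KiB logical block size
    print(entries * block / 2**30, "GiB")      # -> 80.0 GiB of addressable space

The per-step durations logged during this startup (Initialize NV cache at 64.109 ms, Restore P2L checkpoints at 109.538 ms, and so on) also account for nearly all of the 455.787 ms total that the finish record below reports, the remainder being inter-step overhead.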
00:26:18.199 [2024-11-20 15:19:18.868729] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 455.787 ms, result 0 00:26:19.582  [2024-11-20T15:19:21.355Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-20T15:19:22.292Z] Copying: 57/1024 [MB] (29 MBps) [2024-11-20T15:19:23.228Z] Copying: 87/1024 [MB] (30 MBps) [2024-11-20T15:19:24.166Z] Copying: 117/1024 [MB] (29 MBps) [2024-11-20T15:19:25.102Z] Copying: 147/1024 [MB] (30 MBps) [2024-11-20T15:19:26.478Z] Copying: 175/1024 [MB] (27 MBps) [2024-11-20T15:19:27.413Z] Copying: 204/1024 [MB] (28 MBps) [2024-11-20T15:19:28.351Z] Copying: 232/1024 [MB] (27 MBps) [2024-11-20T15:19:29.287Z] Copying: 261/1024 [MB] (28 MBps) [2024-11-20T15:19:30.226Z] Copying: 289/1024 [MB] (28 MBps) [2024-11-20T15:19:31.162Z] Copying: 319/1024 [MB] (29 MBps) [2024-11-20T15:19:32.098Z] Copying: 349/1024 [MB] (30 MBps) [2024-11-20T15:19:33.477Z] Copying: 380/1024 [MB] (30 MBps) [2024-11-20T15:19:34.450Z] Copying: 411/1024 [MB] (30 MBps) [2024-11-20T15:19:35.387Z] Copying: 443/1024 [MB] (32 MBps) [2024-11-20T15:19:36.322Z] Copying: 472/1024 [MB] (29 MBps) [2024-11-20T15:19:37.259Z] Copying: 502/1024 [MB] (29 MBps) [2024-11-20T15:19:38.195Z] Copying: 530/1024 [MB] (28 MBps) [2024-11-20T15:19:39.132Z] Copying: 559/1024 [MB] (29 MBps) [2024-11-20T15:19:40.509Z] Copying: 587/1024 [MB] (28 MBps) [2024-11-20T15:19:41.077Z] Copying: 617/1024 [MB] (29 MBps) [2024-11-20T15:19:42.452Z] Copying: 645/1024 [MB] (28 MBps) [2024-11-20T15:19:43.388Z] Copying: 674/1024 [MB] (28 MBps) [2024-11-20T15:19:44.324Z] Copying: 703/1024 [MB] (29 MBps) [2024-11-20T15:19:45.260Z] Copying: 734/1024 [MB] (31 MBps) [2024-11-20T15:19:46.196Z] Copying: 764/1024 [MB] (29 MBps) [2024-11-20T15:19:47.144Z] Copying: 794/1024 [MB] (30 MBps) [2024-11-20T15:19:48.083Z] Copying: 823/1024 [MB] (29 MBps) [2024-11-20T15:19:49.456Z] Copying: 850/1024 [MB] (26 MBps) [2024-11-20T15:19:50.393Z] Copying: 877/1024 [MB] (27 MBps) [2024-11-20T15:19:51.331Z] Copying: 905/1024 [MB] (28 MBps) [2024-11-20T15:19:52.266Z] Copying: 934/1024 [MB] (28 MBps) [2024-11-20T15:19:53.203Z] Copying: 962/1024 [MB] (28 MBps) [2024-11-20T15:19:54.140Z] Copying: 990/1024 [MB] (28 MBps) [2024-11-20T15:19:54.399Z] Copying: 1018/1024 [MB] (27 MBps) [2024-11-20T15:19:54.657Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-20 15:19:54.427027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.821 [2024-11-20 15:19:54.427119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:53.821 [2024-11-20 15:19:54.427141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:53.821 [2024-11-20 15:19:54.427173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.821 [2024-11-20 15:19:54.427207] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:53.821 [2024-11-20 15:19:54.434222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.821 [2024-11-20 15:19:54.434277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:53.821 [2024-11-20 15:19:54.434297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.997 ms 00:26:53.821 [2024-11-20 15:19:54.434311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.821 [2024-11-20 15:19:54.434627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.821 [2024-11-20 15:19:54.434652] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:53.821 [2024-11-20 15:19:54.434668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:26:53.821 [2024-11-20 15:19:54.434682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.821 [2024-11-20 15:19:54.439527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.821 [2024-11-20 15:19:54.439567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:53.821 [2024-11-20 15:19:54.439582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.822 ms 00:26:53.821 [2024-11-20 15:19:54.439593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.821 [2024-11-20 15:19:54.444763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.821 [2024-11-20 15:19:54.444799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:53.821 [2024-11-20 15:19:54.444812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.140 ms 00:26:53.821 [2024-11-20 15:19:54.444824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.821 [2024-11-20 15:19:54.486295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.821 [2024-11-20 15:19:54.486373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:53.821 [2024-11-20 15:19:54.486393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.474 ms 00:26:53.821 [2024-11-20 15:19:54.486404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.822 [2024-11-20 15:19:54.508691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.822 [2024-11-20 15:19:54.508757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:53.822 [2024-11-20 15:19:54.508773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.267 ms 00:26:53.822 [2024-11-20 15:19:54.508784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.822 [2024-11-20 15:19:54.632834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.822 [2024-11-20 15:19:54.632960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:53.822 [2024-11-20 15:19:54.632982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 124.195 ms 00:26:53.822 [2024-11-20 15:19:54.632995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.081 [2024-11-20 15:19:54.674733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.081 [2024-11-20 15:19:54.674816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:54.081 [2024-11-20 15:19:54.674838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.782 ms 00:26:54.081 [2024-11-20 15:19:54.674850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.081 [2024-11-20 15:19:54.713760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.081 [2024-11-20 15:19:54.713834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:54.081 [2024-11-20 15:19:54.713871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.909 ms 00:26:54.081 [2024-11-20 15:19:54.713882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.081 [2024-11-20 15:19:54.750740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:54.081 [2024-11-20 15:19:54.750812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:54.081 [2024-11-20 15:19:54.750831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.862 ms 00:26:54.081 [2024-11-20 15:19:54.750842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.082 [2024-11-20 15:19:54.789593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.082 [2024-11-20 15:19:54.789713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:54.082 [2024-11-20 15:19:54.789746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.701 ms 00:26:54.082 [2024-11-20 15:19:54.789759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.082 [2024-11-20 15:19:54.789815] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:54.082 [2024-11-20 15:19:54.789838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:26:54.082 [2024-11-20 15:19:54.789854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:54.082 [2024-11-20 15:19:54.789866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:54.082 [2024-11-20 15:19:54.789878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:54.082 [2024-11-20 15:19:54.789890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:54.082 [2024-11-20 15:19:54.789902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:54.082 [2024-11-20 15:19:54.789914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:54.082 [2024-11-20 15:19:54.789927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:54.082 [2024-11-20 15:19:54.789938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:54.082 [2024-11-20 15:19:54.789949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:54.082 [2024-11-20 15:19:54.789961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:54.082 [2024-11-20 15:19:54.789972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:54.082 [2024-11-20 15:19:54.789984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:54.082 [2024-11-20 15:19:54.789996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:54.082 [2024-11-20 15:19:54.790008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:54.082 [2024-11-20 15:19:54.790019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:54.082 [2024-11-20 15:19:54.790031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:54.082 [2024-11-20 15:19:54.790043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:54.082 [2024-11-20 15:19:54.790054] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 19-92: 0 / 261120 wr_cnt: 0 state: free (74 identical per-band entries, [2024-11-20 15:19:54.790066] through [2024-11-20 15:19:54.790925], condensed) 00:26:54.083 [2024-11-20 15:19:54.790937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93:
0 / 261120 wr_cnt: 0 state: free 00:26:54.083 [2024-11-20 15:19:54.790948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:54.083 [2024-11-20 15:19:54.790960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:54.083 [2024-11-20 15:19:54.790971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:54.083 [2024-11-20 15:19:54.790983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:54.083 [2024-11-20 15:19:54.790993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:54.083 [2024-11-20 15:19:54.791004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:54.083 [2024-11-20 15:19:54.791015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:54.083 [2024-11-20 15:19:54.791034] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:54.083 [2024-11-20 15:19:54.791045] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7d8770c9-24dd-42ab-a5d0-936f5b553fe3 00:26:54.083 [2024-11-20 15:19:54.791056] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:26:54.083 [2024-11-20 15:19:54.791067] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 13504 00:26:54.083 [2024-11-20 15:19:54.791077] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 12544 00:26:54.083 [2024-11-20 15:19:54.791088] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0765 00:26:54.083 [2024-11-20 15:19:54.791099] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:54.083 [2024-11-20 15:19:54.791117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:54.083 [2024-11-20 15:19:54.791128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:54.083 [2024-11-20 15:19:54.791150] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:54.083 [2024-11-20 15:19:54.791162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:54.083 [2024-11-20 15:19:54.791173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.083 [2024-11-20 15:19:54.791184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:54.083 [2024-11-20 15:19:54.791195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:26:54.083 [2024-11-20 15:19:54.791205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.083 [2024-11-20 15:19:54.812415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.083 [2024-11-20 15:19:54.812461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:54.083 [2024-11-20 15:19:54.812477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.197 ms 00:26:54.083 [2024-11-20 15:19:54.812495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.083 [2024-11-20 15:19:54.813144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:54.083 [2024-11-20 15:19:54.813166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:54.083 [2024-11-20 15:19:54.813178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.620 ms 00:26:54.083 [2024-11-20 15:19:54.813188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.083 [2024-11-20 15:19:54.869580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.083 [2024-11-20 15:19:54.869692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:54.083 [2024-11-20 15:19:54.869711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.083 [2024-11-20 15:19:54.869732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.083 [2024-11-20 15:19:54.869834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.083 [2024-11-20 15:19:54.869847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:54.083 [2024-11-20 15:19:54.869859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.083 [2024-11-20 15:19:54.869870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.083 [2024-11-20 15:19:54.870009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.083 [2024-11-20 15:19:54.870024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:54.083 [2024-11-20 15:19:54.870041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.083 [2024-11-20 15:19:54.870052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.083 [2024-11-20 15:19:54.870072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.083 [2024-11-20 15:19:54.870084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:54.083 [2024-11-20 15:19:54.870096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.083 [2024-11-20 15:19:54.870106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.342 [2024-11-20 15:19:55.009633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.342 [2024-11-20 15:19:55.009725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:54.342 [2024-11-20 15:19:55.009754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.342 [2024-11-20 15:19:55.009767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.342 [2024-11-20 15:19:55.122045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.342 [2024-11-20 15:19:55.122115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:54.342 [2024-11-20 15:19:55.122134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.342 [2024-11-20 15:19:55.122147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.342 [2024-11-20 15:19:55.122283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.342 [2024-11-20 15:19:55.122297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:54.342 [2024-11-20 15:19:55.122310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.342 [2024-11-20 15:19:55.122326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.342 [2024-11-20 15:19:55.122381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.342 [2024-11-20 15:19:55.122393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:54.342 [2024-11-20 
15:19:55.122405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.342 [2024-11-20 15:19:55.122416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.342 [2024-11-20 15:19:55.122551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.342 [2024-11-20 15:19:55.122565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:54.342 [2024-11-20 15:19:55.122578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.342 [2024-11-20 15:19:55.122588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.342 [2024-11-20 15:19:55.122633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.342 [2024-11-20 15:19:55.122646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:54.342 [2024-11-20 15:19:55.122658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.342 [2024-11-20 15:19:55.122669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.342 [2024-11-20 15:19:55.122716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.342 [2024-11-20 15:19:55.122742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:54.342 [2024-11-20 15:19:55.122753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.342 [2024-11-20 15:19:55.122764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.342 [2024-11-20 15:19:55.122820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:54.342 [2024-11-20 15:19:55.122834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:54.342 [2024-11-20 15:19:55.122846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:54.342 [2024-11-20 15:19:55.122856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:54.342 [2024-11-20 15:19:55.123008] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 697.065 ms, result 0 00:26:55.720 00:26:55.721 00:26:55.721 15:19:56 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:57.624 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:57.624 15:19:58 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:57.624 15:19:58 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:26:57.624 15:19:58 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:57.624 15:19:58 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:57.624 15:19:58 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:57.624 Process with pid 79337 is not found 00:26:57.624 Remove shared memory files 00:26:57.624 15:19:58 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79337 00:26:57.624 15:19:58 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79337 ']' 00:26:57.624 15:19:58 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79337 00:26:57.624 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79337) - No such process 00:26:57.624 15:19:58 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79337 is not found' 00:26:57.624 15:19:58 ftl.ftl_restore 
-- ftl/restore.sh@33 -- # remove_shm 00:26:57.624 15:19:58 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:57.624 15:19:58 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:26:57.624 15:19:58 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:26:57.624 15:19:58 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:26:57.624 15:19:58 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:57.624 15:19:58 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:26:57.624 ************************************ 00:26:57.624 END TEST ftl_restore 00:26:57.624 ************************************ 00:26:57.624 00:26:57.624 real 3m8.243s 00:26:57.624 user 2m53.680s 00:26:57.624 sys 0m15.559s 00:26:57.624 15:19:58 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:57.624 15:19:58 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:57.624 15:19:58 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:57.624 15:19:58 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:57.624 15:19:58 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:57.624 15:19:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:57.624 ************************************ 00:26:57.624 START TEST ftl_dirty_shutdown 00:26:57.624 ************************************ 00:26:57.624 15:19:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:57.884 * Looking for test storage... 00:26:57.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:57.884 15:19:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:57.884 15:19:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:26:57.884 15:19:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:57.884 15:19:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:57.884 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:57.884 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:57.884 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:57.884 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:57.884 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:57.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.885 --rc genhtml_branch_coverage=1 00:26:57.885 --rc genhtml_function_coverage=1 00:26:57.885 --rc genhtml_legend=1 00:26:57.885 --rc geninfo_all_blocks=1 00:26:57.885 --rc geninfo_unexecuted_blocks=1 00:26:57.885 00:26:57.885 ' 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:57.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.885 --rc genhtml_branch_coverage=1 00:26:57.885 --rc genhtml_function_coverage=1 00:26:57.885 --rc genhtml_legend=1 00:26:57.885 --rc geninfo_all_blocks=1 00:26:57.885 --rc geninfo_unexecuted_blocks=1 00:26:57.885 00:26:57.885 ' 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:57.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.885 --rc genhtml_branch_coverage=1 00:26:57.885 --rc genhtml_function_coverage=1 00:26:57.885 --rc genhtml_legend=1 00:26:57.885 --rc geninfo_all_blocks=1 00:26:57.885 --rc geninfo_unexecuted_blocks=1 00:26:57.885 00:26:57.885 ' 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:57.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.885 --rc genhtml_branch_coverage=1 00:26:57.885 --rc genhtml_function_coverage=1 00:26:57.885 --rc genhtml_legend=1 00:26:57.885 --rc geninfo_all_blocks=1 00:26:57.885 --rc geninfo_unexecuted_blocks=1 00:26:57.885 00:26:57.885 ' 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:26:57.885 15:19:58 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81303 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81303 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81303 ']' 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:57.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:57.885 15:19:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:58.143 [2024-11-20 15:19:58.764483] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
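(The base-bdev sizing traced just below reduces to block_size × num_blocks converted to MiB. A minimal stand-alone sketch of that arithmetic, using the values the trace itself reports — block_size 4096 and num_blocks 1310720 from bdev_get_bdevs; the snippet is illustrative and is not the actual body of the get_bdev_size helper:

    # re-derive the 5120 MiB size that get_bdev_size echoes in the trace below
    bs=4096        # jq '.[] .block_size' of the bdev_get_bdevs output
    nb=1310720     # jq '.[] .num_blocks' of the bdev_get_bdevs output
    echo $(( bs * nb / 1024 / 1024 ))   # prints 5120

The same conversion explains the later 103424-block... sizes reported for the lvol bdev: 26476544 blocks × 4096 bytes = 103424 MiB.)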
00:26:58.143 [2024-11-20 15:19:58.764642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81303 ] 00:26:58.143 [2024-11-20 15:19:58.947412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.401 [2024-11-20 15:19:59.097129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.335 15:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:59.335 15:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:59.335 15:20:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:59.335 15:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:26:59.335 15:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:59.335 15:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:26:59.335 15:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:59.335 15:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:59.900 15:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:59.900 15:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:59.900 15:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:59.900 15:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:59.900 15:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:59.900 15:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:59.900 15:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:59.900 15:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:00.159 15:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:00.159 { 00:27:00.159 "name": "nvme0n1", 00:27:00.159 "aliases": [ 00:27:00.159 "17910dc1-e7ab-4363-9ab6-fb2fde1e7b44" 00:27:00.159 ], 00:27:00.159 "product_name": "NVMe disk", 00:27:00.159 "block_size": 4096, 00:27:00.159 "num_blocks": 1310720, 00:27:00.159 "uuid": "17910dc1-e7ab-4363-9ab6-fb2fde1e7b44", 00:27:00.159 "numa_id": -1, 00:27:00.159 "assigned_rate_limits": { 00:27:00.159 "rw_ios_per_sec": 0, 00:27:00.159 "rw_mbytes_per_sec": 0, 00:27:00.159 "r_mbytes_per_sec": 0, 00:27:00.159 "w_mbytes_per_sec": 0 00:27:00.159 }, 00:27:00.159 "claimed": true, 00:27:00.159 "claim_type": "read_many_write_one", 00:27:00.159 "zoned": false, 00:27:00.159 "supported_io_types": { 00:27:00.159 "read": true, 00:27:00.159 "write": true, 00:27:00.159 "unmap": true, 00:27:00.159 "flush": true, 00:27:00.159 "reset": true, 00:27:00.159 "nvme_admin": true, 00:27:00.159 "nvme_io": true, 00:27:00.159 "nvme_io_md": false, 00:27:00.159 "write_zeroes": true, 00:27:00.159 "zcopy": false, 00:27:00.159 "get_zone_info": false, 00:27:00.159 "zone_management": false, 00:27:00.159 "zone_append": false, 00:27:00.159 "compare": true, 00:27:00.159 "compare_and_write": false, 00:27:00.159 "abort": true, 00:27:00.159 "seek_hole": false, 00:27:00.159 "seek_data": false, 00:27:00.159 
"copy": true, 00:27:00.159 "nvme_iov_md": false 00:27:00.159 }, 00:27:00.159 "driver_specific": { 00:27:00.159 "nvme": [ 00:27:00.159 { 00:27:00.159 "pci_address": "0000:00:11.0", 00:27:00.159 "trid": { 00:27:00.159 "trtype": "PCIe", 00:27:00.159 "traddr": "0000:00:11.0" 00:27:00.159 }, 00:27:00.159 "ctrlr_data": { 00:27:00.159 "cntlid": 0, 00:27:00.159 "vendor_id": "0x1b36", 00:27:00.159 "model_number": "QEMU NVMe Ctrl", 00:27:00.159 "serial_number": "12341", 00:27:00.159 "firmware_revision": "8.0.0", 00:27:00.159 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:00.159 "oacs": { 00:27:00.159 "security": 0, 00:27:00.159 "format": 1, 00:27:00.159 "firmware": 0, 00:27:00.159 "ns_manage": 1 00:27:00.159 }, 00:27:00.159 "multi_ctrlr": false, 00:27:00.159 "ana_reporting": false 00:27:00.159 }, 00:27:00.159 "vs": { 00:27:00.159 "nvme_version": "1.4" 00:27:00.159 }, 00:27:00.159 "ns_data": { 00:27:00.159 "id": 1, 00:27:00.159 "can_share": false 00:27:00.159 } 00:27:00.159 } 00:27:00.159 ], 00:27:00.159 "mp_policy": "active_passive" 00:27:00.159 } 00:27:00.159 } 00:27:00.159 ]' 00:27:00.159 15:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:00.159 15:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:00.159 15:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:00.159 15:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:00.159 15:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:00.159 15:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:00.159 15:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:00.159 15:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:00.159 15:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:00.159 15:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:00.159 15:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:00.417 15:20:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=ae23ae9c-c3cd-49de-b25c-d81dde85780d 00:27:00.417 15:20:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:00.417 15:20:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ae23ae9c-c3cd-49de-b25c-d81dde85780d 00:27:00.674 15:20:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:00.932 15:20:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=5d8663cd-5546-4903-b95c-c784edb91d06 00:27:00.932 15:20:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5d8663cd-5546-4903-b95c-c784edb91d06 00:27:00.932 15:20:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=12c6732f-3a3e-45d3-b93c-fb1585b649cb 00:27:00.932 15:20:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:27:00.932 15:20:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 12c6732f-3a3e-45d3-b93c-fb1585b649cb 00:27:00.932 15:20:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:27:00.932 15:20:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:27:00.932 15:20:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=12c6732f-3a3e-45d3-b93c-fb1585b649cb 00:27:00.932 15:20:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:27:00.932 15:20:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 12c6732f-3a3e-45d3-b93c-fb1585b649cb 00:27:00.932 15:20:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=12c6732f-3a3e-45d3-b93c-fb1585b649cb 00:27:00.932 15:20:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:00.932 15:20:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:00.932 15:20:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:01.190 15:20:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 12c6732f-3a3e-45d3-b93c-fb1585b649cb 00:27:01.190 15:20:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:01.190 { 00:27:01.190 "name": "12c6732f-3a3e-45d3-b93c-fb1585b649cb", 00:27:01.190 "aliases": [ 00:27:01.190 "lvs/nvme0n1p0" 00:27:01.190 ], 00:27:01.190 "product_name": "Logical Volume", 00:27:01.190 "block_size": 4096, 00:27:01.190 "num_blocks": 26476544, 00:27:01.190 "uuid": "12c6732f-3a3e-45d3-b93c-fb1585b649cb", 00:27:01.190 "assigned_rate_limits": { 00:27:01.190 "rw_ios_per_sec": 0, 00:27:01.190 "rw_mbytes_per_sec": 0, 00:27:01.190 "r_mbytes_per_sec": 0, 00:27:01.190 "w_mbytes_per_sec": 0 00:27:01.190 }, 00:27:01.190 "claimed": false, 00:27:01.190 "zoned": false, 00:27:01.190 "supported_io_types": { 00:27:01.190 "read": true, 00:27:01.190 "write": true, 00:27:01.190 "unmap": true, 00:27:01.190 "flush": false, 00:27:01.190 "reset": true, 00:27:01.190 "nvme_admin": false, 00:27:01.190 "nvme_io": false, 00:27:01.190 "nvme_io_md": false, 00:27:01.190 "write_zeroes": true, 00:27:01.190 "zcopy": false, 00:27:01.190 "get_zone_info": false, 00:27:01.190 "zone_management": false, 00:27:01.190 "zone_append": false, 00:27:01.190 "compare": false, 00:27:01.190 "compare_and_write": false, 00:27:01.190 "abort": false, 00:27:01.190 "seek_hole": true, 00:27:01.190 "seek_data": true, 00:27:01.190 "copy": false, 00:27:01.190 "nvme_iov_md": false 00:27:01.190 }, 00:27:01.190 "driver_specific": { 00:27:01.190 "lvol": { 00:27:01.190 "lvol_store_uuid": "5d8663cd-5546-4903-b95c-c784edb91d06", 00:27:01.190 "base_bdev": "nvme0n1", 00:27:01.190 "thin_provision": true, 00:27:01.190 "num_allocated_clusters": 0, 00:27:01.190 "snapshot": false, 00:27:01.190 "clone": false, 00:27:01.190 "esnap_clone": false 00:27:01.190 } 00:27:01.190 } 00:27:01.190 } 00:27:01.190 ]' 00:27:01.190 15:20:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:01.448 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:01.448 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:01.448 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:01.449 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:01.449 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:01.449 15:20:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:27:01.449 15:20:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:01.449 15:20:02 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:01.707 15:20:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:01.707 15:20:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:01.707 15:20:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 12c6732f-3a3e-45d3-b93c-fb1585b649cb 00:27:01.707 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=12c6732f-3a3e-45d3-b93c-fb1585b649cb 00:27:01.707 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:01.707 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:01.707 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:01.707 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 12c6732f-3a3e-45d3-b93c-fb1585b649cb 00:27:01.965 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:01.965 { 00:27:01.965 "name": "12c6732f-3a3e-45d3-b93c-fb1585b649cb", 00:27:01.965 "aliases": [ 00:27:01.965 "lvs/nvme0n1p0" 00:27:01.965 ], 00:27:01.965 "product_name": "Logical Volume", 00:27:01.965 "block_size": 4096, 00:27:01.965 "num_blocks": 26476544, 00:27:01.965 "uuid": "12c6732f-3a3e-45d3-b93c-fb1585b649cb", 00:27:01.965 "assigned_rate_limits": { 00:27:01.965 "rw_ios_per_sec": 0, 00:27:01.965 "rw_mbytes_per_sec": 0, 00:27:01.965 "r_mbytes_per_sec": 0, 00:27:01.965 "w_mbytes_per_sec": 0 00:27:01.965 }, 00:27:01.965 "claimed": false, 00:27:01.965 "zoned": false, 00:27:01.965 "supported_io_types": { 00:27:01.965 "read": true, 00:27:01.965 "write": true, 00:27:01.965 "unmap": true, 00:27:01.965 "flush": false, 00:27:01.965 "reset": true, 00:27:01.965 "nvme_admin": false, 00:27:01.965 "nvme_io": false, 00:27:01.965 "nvme_io_md": false, 00:27:01.965 "write_zeroes": true, 00:27:01.965 "zcopy": false, 00:27:01.965 "get_zone_info": false, 00:27:01.965 "zone_management": false, 00:27:01.965 "zone_append": false, 00:27:01.965 "compare": false, 00:27:01.965 "compare_and_write": false, 00:27:01.965 "abort": false, 00:27:01.965 "seek_hole": true, 00:27:01.965 "seek_data": true, 00:27:01.965 "copy": false, 00:27:01.965 "nvme_iov_md": false 00:27:01.965 }, 00:27:01.965 "driver_specific": { 00:27:01.965 "lvol": { 00:27:01.965 "lvol_store_uuid": "5d8663cd-5546-4903-b95c-c784edb91d06", 00:27:01.965 "base_bdev": "nvme0n1", 00:27:01.965 "thin_provision": true, 00:27:01.965 "num_allocated_clusters": 0, 00:27:01.965 "snapshot": false, 00:27:01.965 "clone": false, 00:27:01.965 "esnap_clone": false 00:27:01.965 } 00:27:01.965 } 00:27:01.965 } 00:27:01.965 ]' 00:27:01.965 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:01.965 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:01.965 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:01.965 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:01.965 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:01.965 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:01.965 15:20:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:27:01.965 15:20:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:02.223 15:20:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:27:02.223 15:20:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 12c6732f-3a3e-45d3-b93c-fb1585b649cb 00:27:02.223 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=12c6732f-3a3e-45d3-b93c-fb1585b649cb 00:27:02.223 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:02.223 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:02.223 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:02.223 15:20:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 12c6732f-3a3e-45d3-b93c-fb1585b649cb 00:27:02.482 15:20:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:02.482 { 00:27:02.482 "name": "12c6732f-3a3e-45d3-b93c-fb1585b649cb", 00:27:02.482 "aliases": [ 00:27:02.482 "lvs/nvme0n1p0" 00:27:02.482 ], 00:27:02.482 "product_name": "Logical Volume", 00:27:02.482 "block_size": 4096, 00:27:02.482 "num_blocks": 26476544, 00:27:02.482 "uuid": "12c6732f-3a3e-45d3-b93c-fb1585b649cb", 00:27:02.482 "assigned_rate_limits": { 00:27:02.482 "rw_ios_per_sec": 0, 00:27:02.482 "rw_mbytes_per_sec": 0, 00:27:02.482 "r_mbytes_per_sec": 0, 00:27:02.482 "w_mbytes_per_sec": 0 00:27:02.482 }, 00:27:02.482 "claimed": false, 00:27:02.482 "zoned": false, 00:27:02.482 "supported_io_types": { 00:27:02.482 "read": true, 00:27:02.482 "write": true, 00:27:02.482 "unmap": true, 00:27:02.482 "flush": false, 00:27:02.482 "reset": true, 00:27:02.482 "nvme_admin": false, 00:27:02.482 "nvme_io": false, 00:27:02.482 "nvme_io_md": false, 00:27:02.482 "write_zeroes": true, 00:27:02.482 "zcopy": false, 00:27:02.482 "get_zone_info": false, 00:27:02.482 "zone_management": false, 00:27:02.482 "zone_append": false, 00:27:02.482 "compare": false, 00:27:02.482 "compare_and_write": false, 00:27:02.482 "abort": false, 00:27:02.482 "seek_hole": true, 00:27:02.482 "seek_data": true, 00:27:02.482 "copy": false, 00:27:02.482 "nvme_iov_md": false 00:27:02.482 }, 00:27:02.482 "driver_specific": { 00:27:02.482 "lvol": { 00:27:02.482 "lvol_store_uuid": "5d8663cd-5546-4903-b95c-c784edb91d06", 00:27:02.482 "base_bdev": "nvme0n1", 00:27:02.482 "thin_provision": true, 00:27:02.482 "num_allocated_clusters": 0, 00:27:02.482 "snapshot": false, 00:27:02.482 "clone": false, 00:27:02.482 "esnap_clone": false 00:27:02.482 } 00:27:02.482 } 00:27:02.482 } 00:27:02.482 ]' 00:27:02.482 15:20:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:02.482 15:20:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:02.482 15:20:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:02.482 15:20:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:02.482 15:20:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:02.482 15:20:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:02.482 15:20:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:27:02.482 15:20:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 12c6732f-3a3e-45d3-b93c-fb1585b649cb 
--l2p_dram_limit 10' 00:27:02.482 15:20:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:27:02.482 15:20:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:27:02.483 15:20:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:02.483 15:20:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 12c6732f-3a3e-45d3-b93c-fb1585b649cb --l2p_dram_limit 10 -c nvc0n1p0 00:27:02.742 [2024-11-20 15:20:03.407039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.742 [2024-11-20 15:20:03.407114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:02.742 [2024-11-20 15:20:03.407138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:02.742 [2024-11-20 15:20:03.407150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.742 [2024-11-20 15:20:03.407252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.742 [2024-11-20 15:20:03.407267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:02.742 [2024-11-20 15:20:03.407282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:27:02.742 [2024-11-20 15:20:03.407293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.742 [2024-11-20 15:20:03.407319] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:02.742 [2024-11-20 15:20:03.408473] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:02.742 [2024-11-20 15:20:03.408518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.742 [2024-11-20 15:20:03.408529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:02.742 [2024-11-20 15:20:03.408545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.201 ms 00:27:02.742 [2024-11-20 15:20:03.408556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.742 [2024-11-20 15:20:03.408652] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 449efaf6-bbc5-4a3f-99c9-acfa73fd2d6c 00:27:02.742 [2024-11-20 15:20:03.411112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.742 [2024-11-20 15:20:03.411157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:02.742 [2024-11-20 15:20:03.411170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:27:02.742 [2024-11-20 15:20:03.411184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.742 [2024-11-20 15:20:03.425260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.742 [2024-11-20 15:20:03.425489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:02.742 [2024-11-20 15:20:03.425515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.029 ms 00:27:02.742 [2024-11-20 15:20:03.425529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.742 [2024-11-20 15:20:03.425670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.742 [2024-11-20 15:20:03.425687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:02.742 [2024-11-20 15:20:03.425700] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:27:02.742 [2024-11-20 15:20:03.425757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.742 [2024-11-20 15:20:03.425865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.742 [2024-11-20 15:20:03.425883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:02.742 [2024-11-20 15:20:03.425895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:02.742 [2024-11-20 15:20:03.425914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.742 [2024-11-20 15:20:03.425946] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:02.742 [2024-11-20 15:20:03.432060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.742 [2024-11-20 15:20:03.432203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:02.742 [2024-11-20 15:20:03.432232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.131 ms 00:27:02.742 [2024-11-20 15:20:03.432244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.742 [2024-11-20 15:20:03.432289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.742 [2024-11-20 15:20:03.432300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:02.742 [2024-11-20 15:20:03.432314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:02.742 [2024-11-20 15:20:03.432326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.742 [2024-11-20 15:20:03.432367] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:02.742 [2024-11-20 15:20:03.432517] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:02.742 [2024-11-20 15:20:03.432539] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:02.742 [2024-11-20 15:20:03.432554] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:02.742 [2024-11-20 15:20:03.432571] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:02.742 [2024-11-20 15:20:03.432584] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:02.742 [2024-11-20 15:20:03.432599] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:02.742 [2024-11-20 15:20:03.432609] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:02.742 [2024-11-20 15:20:03.432626] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:02.742 [2024-11-20 15:20:03.432637] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:02.742 [2024-11-20 15:20:03.432651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.742 [2024-11-20 15:20:03.432662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:02.742 [2024-11-20 15:20:03.432675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:27:02.742 [2024-11-20 15:20:03.432697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.742 [2024-11-20 15:20:03.432792] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.742 [2024-11-20 15:20:03.432805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:02.742 [2024-11-20 15:20:03.432819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:02.742 [2024-11-20 15:20:03.432829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.742 [2024-11-20 15:20:03.432936] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:02.742 [2024-11-20 15:20:03.432949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:02.742 [2024-11-20 15:20:03.432964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:02.743 [2024-11-20 15:20:03.432975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:02.743 [2024-11-20 15:20:03.432989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:02.743 [2024-11-20 15:20:03.432998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:02.743 [2024-11-20 15:20:03.433011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:02.743 [2024-11-20 15:20:03.433020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:02.743 [2024-11-20 15:20:03.433033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:02.743 [2024-11-20 15:20:03.433042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:02.743 [2024-11-20 15:20:03.433054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:02.743 [2024-11-20 15:20:03.433063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:02.743 [2024-11-20 15:20:03.433076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:02.743 [2024-11-20 15:20:03.433086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:02.743 [2024-11-20 15:20:03.433098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:02.743 [2024-11-20 15:20:03.433107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:02.743 [2024-11-20 15:20:03.433122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:02.743 [2024-11-20 15:20:03.433131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:02.743 [2024-11-20 15:20:03.433145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:02.743 [2024-11-20 15:20:03.433154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:02.743 [2024-11-20 15:20:03.433166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:02.743 [2024-11-20 15:20:03.433177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:02.743 [2024-11-20 15:20:03.433190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:02.743 [2024-11-20 15:20:03.433199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:02.743 [2024-11-20 15:20:03.433211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:02.743 [2024-11-20 15:20:03.433220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:02.743 [2024-11-20 15:20:03.433232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:02.743 [2024-11-20 15:20:03.433241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:02.743 [2024-11-20 15:20:03.433253] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:02.743 [2024-11-20 15:20:03.433263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:02.743 [2024-11-20 15:20:03.433275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:02.743 [2024-11-20 15:20:03.433284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:02.743 [2024-11-20 15:20:03.433300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:02.743 [2024-11-20 15:20:03.433309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:02.743 [2024-11-20 15:20:03.433321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:02.743 [2024-11-20 15:20:03.433330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:02.743 [2024-11-20 15:20:03.433342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:02.743 [2024-11-20 15:20:03.433351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:02.743 [2024-11-20 15:20:03.433364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:02.743 [2024-11-20 15:20:03.433374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:02.743 [2024-11-20 15:20:03.433385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:02.743 [2024-11-20 15:20:03.433394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:02.743 [2024-11-20 15:20:03.433407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:02.743 [2024-11-20 15:20:03.433415] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:02.743 [2024-11-20 15:20:03.433429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:02.743 [2024-11-20 15:20:03.433439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:02.743 [2024-11-20 15:20:03.433454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:02.743 [2024-11-20 15:20:03.433465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:02.743 [2024-11-20 15:20:03.433480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:02.743 [2024-11-20 15:20:03.433490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:02.743 [2024-11-20 15:20:03.433503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:02.743 [2024-11-20 15:20:03.433512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:02.743 [2024-11-20 15:20:03.433524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:02.743 [2024-11-20 15:20:03.433539] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:02.743 [2024-11-20 15:20:03.433556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:02.743 [2024-11-20 15:20:03.433572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:02.743 [2024-11-20 15:20:03.433586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:02.743 [2024-11-20 15:20:03.433604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:02.743 [2024-11-20 15:20:03.433618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:02.743 [2024-11-20 15:20:03.433628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:02.743 [2024-11-20 15:20:03.433642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:02.743 [2024-11-20 15:20:03.433652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:02.743 [2024-11-20 15:20:03.433665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:02.743 [2024-11-20 15:20:03.433676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:02.743 [2024-11-20 15:20:03.433692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:02.743 [2024-11-20 15:20:03.433703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:02.743 [2024-11-20 15:20:03.433716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:02.743 [2024-11-20 15:20:03.433736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:02.743 [2024-11-20 15:20:03.433751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:02.743 [2024-11-20 15:20:03.433762] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:02.743 [2024-11-20 15:20:03.433777] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:02.743 [2024-11-20 15:20:03.433789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:02.743 [2024-11-20 15:20:03.433802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:02.743 [2024-11-20 15:20:03.433812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:02.743 [2024-11-20 15:20:03.433827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:02.743 [2024-11-20 15:20:03.433838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.743 [2024-11-20 15:20:03.433851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:02.743 [2024-11-20 15:20:03.433862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:27:02.743 [2024-11-20 15:20:03.433876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.743 [2024-11-20 15:20:03.433920] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:02.743 [2024-11-20 15:20:03.433944] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:06.929 [2024-11-20 15:20:07.081128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.929 [2024-11-20 15:20:07.081213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:06.930 [2024-11-20 15:20:07.081234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3653.124 ms 00:27:06.930 [2024-11-20 15:20:07.081250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.121457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.121528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:06.930 [2024-11-20 15:20:07.121548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.954 ms 00:27:06.930 [2024-11-20 15:20:07.121565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.121762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.121784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:06.930 [2024-11-20 15:20:07.121799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:27:06.930 [2024-11-20 15:20:07.121823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.169788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.169859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:06.930 [2024-11-20 15:20:07.169878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.970 ms 00:27:06.930 [2024-11-20 15:20:07.169894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.169948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.169969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:06.930 [2024-11-20 15:20:07.169984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:06.930 [2024-11-20 15:20:07.170000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.170530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.170556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:06.930 [2024-11-20 15:20:07.170569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:27:06.930 [2024-11-20 15:20:07.170586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.170696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.170714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:06.930 [2024-11-20 15:20:07.170772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:27:06.930 [2024-11-20 15:20:07.170791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.191735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.192022] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:06.930 [2024-11-20 15:20:07.192052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.951 ms 00:27:06.930 [2024-11-20 15:20:07.192069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.215772] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:06.930 [2024-11-20 15:20:07.219394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.219447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:06.930 [2024-11-20 15:20:07.219469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.226 ms 00:27:06.930 [2024-11-20 15:20:07.219483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.316907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.316980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:06.930 [2024-11-20 15:20:07.317004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.513 ms 00:27:06.930 [2024-11-20 15:20:07.317017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.317221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.317242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:06.930 [2024-11-20 15:20:07.317262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:27:06.930 [2024-11-20 15:20:07.317276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.356171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.356244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:06.930 [2024-11-20 15:20:07.356269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.882 ms 00:27:06.930 [2024-11-20 15:20:07.356282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.391730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.391786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:06.930 [2024-11-20 15:20:07.391809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.405 ms 00:27:06.930 [2024-11-20 15:20:07.391822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.392569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.392589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:06.930 [2024-11-20 15:20:07.392607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:27:06.930 [2024-11-20 15:20:07.392623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.497440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.497503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:06.930 [2024-11-20 15:20:07.497530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.912 ms 00:27:06.930 [2024-11-20 15:20:07.497544] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.535280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.535343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:06.930 [2024-11-20 15:20:07.535367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.682 ms 00:27:06.930 [2024-11-20 15:20:07.535381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.572117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.572177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:06.930 [2024-11-20 15:20:07.572199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.737 ms 00:27:06.930 [2024-11-20 15:20:07.572212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.609126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.609181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:06.930 [2024-11-20 15:20:07.609204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.914 ms 00:27:06.930 [2024-11-20 15:20:07.609216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.609274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.609288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:06.930 [2024-11-20 15:20:07.609310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:06.930 [2024-11-20 15:20:07.609323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.609441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.930 [2024-11-20 15:20:07.609457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:06.930 [2024-11-20 15:20:07.609478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:06.930 [2024-11-20 15:20:07.609491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.930 [2024-11-20 15:20:07.610713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4210.039 ms, result 0 00:27:06.930 { 00:27:06.930 "name": "ftl0", 00:27:06.930 "uuid": "449efaf6-bbc5-4a3f-99c9-acfa73fd2d6c" 00:27:06.930 } 00:27:06.930 15:20:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:27:06.930 15:20:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:07.188 15:20:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:27:07.188 15:20:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:27:07.188 15:20:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:27:07.446 /dev/nbd0 00:27:07.446 15:20:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:27:07.446 15:20:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:07.446 15:20:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:27:07.446 15:20:08 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:07.446 15:20:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:07.446 15:20:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:07.446 15:20:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:27:07.446 15:20:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:07.446 15:20:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:07.446 15:20:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:27:07.446 1+0 records in 00:27:07.446 1+0 records out 00:27:07.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512513 s, 8.0 MB/s 00:27:07.446 15:20:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:07.446 15:20:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:27:07.446 15:20:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:07.446 15:20:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:07.446 15:20:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:27:07.446 15:20:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:27:07.447 [2024-11-20 15:20:08.250293] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:27:07.447 [2024-11-20 15:20:08.250744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81457 ] 00:27:07.705 [2024-11-20 15:20:08.434462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.964 [2024-11-20 15:20:08.580993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.351  [2024-11-20T15:20:11.125Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-20T15:20:12.063Z] Copying: 373/1024 [MB] (184 MBps) [2024-11-20T15:20:13.001Z] Copying: 563/1024 [MB] (190 MBps) [2024-11-20T15:20:14.378Z] Copying: 755/1024 [MB] (191 MBps) [2024-11-20T15:20:14.638Z] Copying: 947/1024 [MB] (192 MBps) [2024-11-20T15:20:16.018Z] Copying: 1024/1024 [MB] (average 189 MBps) 00:27:15.182 00:27:15.182 15:20:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:17.088 15:20:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:27:17.088 [2024-11-20 15:20:17.526188] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
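[annotation] The trace above covers the data-prep phase of dirty_shutdown.sh: waitfornbd polls /proc/partitions until /dev/nbd0 appears (@72), a 1 GiB testfile is filled with random data (@75), checksummed with md5sum (@76, presumably for comparison after recovery), and then replayed onto the FTL device through NBD by the spdk_dd whose startup banner appears just above (@77). A condensed, hedged sketch of that flow, with binaries, paths, and flags copied verbatim from the trace; illustrative only, not the test script's literal source:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
  # 262144 blocks x 4096 B = 1 GiB of random reference data
  "$SPDK_DD" -m 0x2 --if=/dev/urandom --of="$TESTFILE" --bs=4096 --count=262144
  md5sum "$TESTFILE"   # reference checksum for the data about to be written
  # push the same data through the FTL bdev exposed as /dev/nbd0, bypassing the page cache
  "$SPDK_DD" -m 0x2 --if="$TESTFILE" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct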
00:27:17.088 [2024-11-20 15:20:17.526348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81551 ] 00:27:17.088 [2024-11-20 15:20:17.715688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.088 [2024-11-20 15:20:17.857982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.477  [2024-11-20T15:20:20.256Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-20T15:20:21.631Z] Copying: 31/1024 [MB] (15 MBps) [2024-11-20T15:20:22.565Z] Copying: 47/1024 [MB] (16 MBps) [2024-11-20T15:20:23.500Z] Copying: 63/1024 [MB] (16 MBps) [2024-11-20T15:20:24.435Z] Copying: 80/1024 [MB] (16 MBps) [2024-11-20T15:20:25.434Z] Copying: 96/1024 [MB] (16 MBps) [2024-11-20T15:20:26.396Z] Copying: 113/1024 [MB] (16 MBps) [2024-11-20T15:20:27.334Z] Copying: 130/1024 [MB] (17 MBps) [2024-11-20T15:20:28.269Z] Copying: 146/1024 [MB] (16 MBps) [2024-11-20T15:20:29.647Z] Copying: 164/1024 [MB] (17 MBps) [2024-11-20T15:20:30.592Z] Copying: 181/1024 [MB] (17 MBps) [2024-11-20T15:20:31.527Z] Copying: 198/1024 [MB] (17 MBps) [2024-11-20T15:20:32.464Z] Copying: 215/1024 [MB] (17 MBps) [2024-11-20T15:20:33.400Z] Copying: 233/1024 [MB] (17 MBps) [2024-11-20T15:20:34.337Z] Copying: 250/1024 [MB] (17 MBps) [2024-11-20T15:20:35.274Z] Copying: 267/1024 [MB] (16 MBps) [2024-11-20T15:20:36.239Z] Copying: 284/1024 [MB] (17 MBps) [2024-11-20T15:20:37.618Z] Copying: 302/1024 [MB] (17 MBps) [2024-11-20T15:20:38.556Z] Copying: 319/1024 [MB] (17 MBps) [2024-11-20T15:20:39.494Z] Copying: 337/1024 [MB] (17 MBps) [2024-11-20T15:20:40.454Z] Copying: 354/1024 [MB] (16 MBps) [2024-11-20T15:20:41.390Z] Copying: 370/1024 [MB] (16 MBps) [2024-11-20T15:20:42.326Z] Copying: 387/1024 [MB] (16 MBps) [2024-11-20T15:20:43.263Z] Copying: 404/1024 [MB] (16 MBps) [2024-11-20T15:20:44.640Z] Copying: 421/1024 [MB] (16 MBps) [2024-11-20T15:20:45.575Z] Copying: 438/1024 [MB] (16 MBps) [2024-11-20T15:20:46.512Z] Copying: 455/1024 [MB] (17 MBps) [2024-11-20T15:20:47.448Z] Copying: 472/1024 [MB] (17 MBps) [2024-11-20T15:20:48.384Z] Copying: 491/1024 [MB] (18 MBps) [2024-11-20T15:20:49.321Z] Copying: 509/1024 [MB] (18 MBps) [2024-11-20T15:20:50.276Z] Copying: 526/1024 [MB] (16 MBps) [2024-11-20T15:20:51.211Z] Copying: 542/1024 [MB] (16 MBps) [2024-11-20T15:20:52.589Z] Copying: 560/1024 [MB] (17 MBps) [2024-11-20T15:20:53.527Z] Copying: 577/1024 [MB] (17 MBps) [2024-11-20T15:20:54.464Z] Copying: 594/1024 [MB] (16 MBps) [2024-11-20T15:20:55.406Z] Copying: 611/1024 [MB] (17 MBps) [2024-11-20T15:20:56.343Z] Copying: 630/1024 [MB] (18 MBps) [2024-11-20T15:20:57.281Z] Copying: 647/1024 [MB] (17 MBps) [2024-11-20T15:20:58.216Z] Copying: 665/1024 [MB] (17 MBps) [2024-11-20T15:20:59.593Z] Copying: 683/1024 [MB] (17 MBps) [2024-11-20T15:21:00.529Z] Copying: 701/1024 [MB] (17 MBps) [2024-11-20T15:21:01.490Z] Copying: 718/1024 [MB] (17 MBps) [2024-11-20T15:21:02.429Z] Copying: 736/1024 [MB] (17 MBps) [2024-11-20T15:21:03.364Z] Copying: 753/1024 [MB] (17 MBps) [2024-11-20T15:21:04.300Z] Copying: 770/1024 [MB] (17 MBps) [2024-11-20T15:21:05.242Z] Copying: 787/1024 [MB] (16 MBps) [2024-11-20T15:21:06.178Z] Copying: 804/1024 [MB] (17 MBps) [2024-11-20T15:21:07.553Z] Copying: 821/1024 [MB] (17 MBps) [2024-11-20T15:21:08.489Z] Copying: 839/1024 [MB] (17 MBps) [2024-11-20T15:21:09.447Z] Copying: 856/1024 [MB] (17 MBps) 
[2024-11-20T15:21:10.383Z] Copying: 873/1024 [MB] (17 MBps) [2024-11-20T15:21:11.317Z] Copying: 890/1024 [MB] (17 MBps) [2024-11-20T15:21:12.250Z] Copying: 908/1024 [MB] (17 MBps) [2024-11-20T15:21:13.185Z] Copying: 926/1024 [MB] (17 MBps) [2024-11-20T15:21:14.563Z] Copying: 943/1024 [MB] (17 MBps) [2024-11-20T15:21:15.162Z] Copying: 960/1024 [MB] (17 MBps) [2024-11-20T15:21:16.539Z] Copying: 977/1024 [MB] (17 MBps) [2024-11-20T15:21:17.476Z] Copying: 995/1024 [MB] (17 MBps) [2024-11-20T15:21:18.043Z] Copying: 1012/1024 [MB] (17 MBps) [2024-11-20T15:21:19.421Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:28:18.585 00:28:18.585 15:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:28:18.585 15:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:28:18.585 15:21:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:18.844 [2024-11-20 15:21:19.544454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.844 [2024-11-20 15:21:19.544766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:18.844 [2024-11-20 15:21:19.544801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:18.844 [2024-11-20 15:21:19.544817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.844 [2024-11-20 15:21:19.544877] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:18.844 [2024-11-20 15:21:19.549525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.844 [2024-11-20 15:21:19.549564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:18.844 [2024-11-20 15:21:19.549582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.628 ms 00:28:18.844 [2024-11-20 15:21:19.549598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.844 [2024-11-20 15:21:19.551486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.844 [2024-11-20 15:21:19.551641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:18.844 [2024-11-20 15:21:19.551678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.845 ms 00:28:18.844 [2024-11-20 15:21:19.551691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.844 [2024-11-20 15:21:19.565956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.844 [2024-11-20 15:21:19.566114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:18.844 [2024-11-20 15:21:19.566147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.253 ms 00:28:18.844 [2024-11-20 15:21:19.566159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.844 [2024-11-20 15:21:19.571334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.844 [2024-11-20 15:21:19.571371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:18.844 [2024-11-20 15:21:19.571391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.135 ms 00:28:18.844 [2024-11-20 15:21:19.571402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.844 [2024-11-20 15:21:19.615670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.844 [2024-11-20 15:21:19.615759] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:18.844 [2024-11-20 15:21:19.615782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.245 ms 00:28:18.844 [2024-11-20 15:21:19.615793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.844 [2024-11-20 15:21:19.640027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.844 [2024-11-20 15:21:19.640211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:18.844 [2024-11-20 15:21:19.640242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.201 ms 00:28:18.844 [2024-11-20 15:21:19.640259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.844 [2024-11-20 15:21:19.640459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.844 [2024-11-20 15:21:19.640475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:18.844 [2024-11-20 15:21:19.640490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:28:18.844 [2024-11-20 15:21:19.640501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.105 [2024-11-20 15:21:19.678522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.105 [2024-11-20 15:21:19.678563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:19.105 [2024-11-20 15:21:19.678582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.056 ms 00:28:19.105 [2024-11-20 15:21:19.678592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.105 [2024-11-20 15:21:19.715145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.105 [2024-11-20 15:21:19.715195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:19.105 [2024-11-20 15:21:19.715214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.562 ms 00:28:19.105 [2024-11-20 15:21:19.715225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.105 [2024-11-20 15:21:19.756181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.105 [2024-11-20 15:21:19.756271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:19.105 [2024-11-20 15:21:19.756295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.959 ms 00:28:19.105 [2024-11-20 15:21:19.756306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.105 [2024-11-20 15:21:19.792703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.105 [2024-11-20 15:21:19.792752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:19.105 [2024-11-20 15:21:19.792770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.301 ms 00:28:19.105 [2024-11-20 15:21:19.792781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.105 [2024-11-20 15:21:19.792830] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:19.105 [2024-11-20 15:21:19.792856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.792873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.792885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 
wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.792900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.792912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.792927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.792939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.792957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.792968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.792983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.792993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
28: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793566] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:19.105 [2024-11-20 15:21:19.793846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.793866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.793878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.793893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.793904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.793932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.793944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.793958] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.793969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.793983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.793999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:19.106 [2024-11-20 15:21:19.794267] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:19.106 [2024-11-20 15:21:19.794281] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 449efaf6-bbc5-4a3f-99c9-acfa73fd2d6c 00:28:19.106 [2024-11-20 15:21:19.794299] ftl_debug.c: 
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:19.106 [2024-11-20 15:21:19.794338] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:19.106 [2024-11-20 15:21:19.794348] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:19.106 [2024-11-20 15:21:19.794366] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:19.106 [2024-11-20 15:21:19.794376] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:19.106 [2024-11-20 15:21:19.794390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:19.106 [2024-11-20 15:21:19.794400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:19.106 [2024-11-20 15:21:19.794413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:19.106 [2024-11-20 15:21:19.794422] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:19.106 [2024-11-20 15:21:19.794435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.106 [2024-11-20 15:21:19.794446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:19.106 [2024-11-20 15:21:19.794460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:28:19.106 [2024-11-20 15:21:19.794471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.106 [2024-11-20 15:21:19.816303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.106 [2024-11-20 15:21:19.816357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:19.106 [2024-11-20 15:21:19.816375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.804 ms 00:28:19.106 [2024-11-20 15:21:19.816386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.106 [2024-11-20 15:21:19.817009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.106 [2024-11-20 15:21:19.817028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:19.106 [2024-11-20 15:21:19.817043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:28:19.106 [2024-11-20 15:21:19.817054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.106 [2024-11-20 15:21:19.888842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.106 [2024-11-20 15:21:19.888895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:19.106 [2024-11-20 15:21:19.888915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.106 [2024-11-20 15:21:19.888926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.106 [2024-11-20 15:21:19.889012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.106 [2024-11-20 15:21:19.889024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:19.106 [2024-11-20 15:21:19.889038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.106 [2024-11-20 15:21:19.889049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.106 [2024-11-20 15:21:19.889175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.106 [2024-11-20 15:21:19.889194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:19.106 [2024-11-20 15:21:19.889209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:28:19.106 [2024-11-20 15:21:19.889219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.106 [2024-11-20 15:21:19.889248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.106 [2024-11-20 15:21:19.889260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:19.106 [2024-11-20 15:21:19.889274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.106 [2024-11-20 15:21:19.889284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.366 [2024-11-20 15:21:20.029848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.366 [2024-11-20 15:21:20.030147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:19.366 [2024-11-20 15:21:20.030183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.366 [2024-11-20 15:21:20.030196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.366 [2024-11-20 15:21:20.140665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.366 [2024-11-20 15:21:20.140777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:19.366 [2024-11-20 15:21:20.140799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.366 [2024-11-20 15:21:20.140825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.366 [2024-11-20 15:21:20.140992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.366 [2024-11-20 15:21:20.141006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:19.366 [2024-11-20 15:21:20.141022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.366 [2024-11-20 15:21:20.141038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.366 [2024-11-20 15:21:20.141110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.366 [2024-11-20 15:21:20.141123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:19.366 [2024-11-20 15:21:20.141138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.366 [2024-11-20 15:21:20.141149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.366 [2024-11-20 15:21:20.141294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.366 [2024-11-20 15:21:20.141308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:19.366 [2024-11-20 15:21:20.141323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.366 [2024-11-20 15:21:20.141336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.366 [2024-11-20 15:21:20.141387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.366 [2024-11-20 15:21:20.141401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:19.366 [2024-11-20 15:21:20.141415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.366 [2024-11-20 15:21:20.141425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.366 [2024-11-20 15:21:20.141490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.366 [2024-11-20 15:21:20.141507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:19.366 [2024-11-20 
15:21:20.141521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.366 [2024-11-20 15:21:20.141532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.366 [2024-11-20 15:21:20.141605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.366 [2024-11-20 15:21:20.141618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:19.366 [2024-11-20 15:21:20.141632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.366 [2024-11-20 15:21:20.141643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.366 [2024-11-20 15:21:20.141826] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 598.298 ms, result 0 00:28:19.366 true 00:28:19.366 15:21:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81303 00:28:19.366 15:21:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81303 00:28:19.366 15:21:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:28:19.626 [2024-11-20 15:21:20.287958] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:28:19.626 [2024-11-20 15:21:20.288108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82177 ] 00:28:19.884 [2024-11-20 15:21:20.473900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.884 [2024-11-20 15:21:20.618486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.262  [2024-11-20T15:21:23.036Z] Copying: 190/1024 [MB] (190 MBps) [2024-11-20T15:21:24.414Z] Copying: 382/1024 [MB] (191 MBps) [2024-11-20T15:21:25.351Z] Copying: 579/1024 [MB] (196 MBps) [2024-11-20T15:21:26.288Z] Copying: 775/1024 [MB] (195 MBps) [2024-11-20T15:21:26.546Z] Copying: 971/1024 [MB] (195 MBps) [2024-11-20T15:21:27.948Z] Copying: 1024/1024 [MB] (average 193 MBps) 00:28:27.112 00:28:27.112 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81303 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:28:27.112 15:21:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:27.112 [2024-11-20 15:21:27.687249] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
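[annotation] The shell notice above ("line 87: 81303 Killed") is the dirty shutdown itself: dirty_shutdown.sh@83 SIGKILLs the spdk_tgt process that owns ftl0, so the device never runs its clean shutdown path, @87 generates a second 1 GiB testfile2, and @88 launches the standalone spdk_dd whose startup banner this is. A hedged sketch of that sequence, assembled from the commands printed in the trace (illustrative; the pid is specific to this run):

  # kill the target without letting FTL persist a clean state
  kill -9 81303
  rm -f /dev/shm/spdk_tgt_trace.pid81303
  # write the second data set straight into the ftl0 bdev: spdk_dd brings the
  # stack up itself from the saved JSON config, which triggers the recovery
  # ("Performing recovery on blobstore", dirty-state FTL startup) logged below
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 \
      --count=262144 --seek=262144 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json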
00:28:27.112 [2024-11-20 15:21:27.687403] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82253 ] 00:28:27.112 [2024-11-20 15:21:27.876108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.371 [2024-11-20 15:21:28.021237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.630 [2024-11-20 15:21:28.463496] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:27.630 [2024-11-20 15:21:28.463592] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:27.889 [2024-11-20 15:21:28.530696] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:27.889 [2024-11-20 15:21:28.531029] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:27.889 [2024-11-20 15:21:28.531238] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:28.149 [2024-11-20 15:21:28.808022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.149 [2024-11-20 15:21:28.808303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:28.149 [2024-11-20 15:21:28.808334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:28.149 [2024-11-20 15:21:28.808345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.149 [2024-11-20 15:21:28.808437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.149 [2024-11-20 15:21:28.808451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:28.149 [2024-11-20 15:21:28.808463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:28.149 [2024-11-20 15:21:28.808475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.149 [2024-11-20 15:21:28.808500] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:28.149 [2024-11-20 15:21:28.809603] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:28.149 [2024-11-20 15:21:28.809636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.149 [2024-11-20 15:21:28.809648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:28.149 [2024-11-20 15:21:28.809660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:28:28.149 [2024-11-20 15:21:28.809672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.149 [2024-11-20 15:21:28.812215] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:28.149 [2024-11-20 15:21:28.832532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.149 [2024-11-20 15:21:28.832592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:28.149 [2024-11-20 15:21:28.832609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.350 ms 00:28:28.149 [2024-11-20 15:21:28.832621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.149 [2024-11-20 15:21:28.832707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.149 [2024-11-20 15:21:28.832747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:28:28.149 [2024-11-20 15:21:28.832760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:28:28.149 [2024-11-20 15:21:28.832771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.149 [2024-11-20 15:21:28.845440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.149 [2024-11-20 15:21:28.845487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:28.149 [2024-11-20 15:21:28.845505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.600 ms 00:28:28.149 [2024-11-20 15:21:28.845517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.149 [2024-11-20 15:21:28.845630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.149 [2024-11-20 15:21:28.845647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:28.149 [2024-11-20 15:21:28.845659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:28:28.149 [2024-11-20 15:21:28.845671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.149 [2024-11-20 15:21:28.845773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.149 [2024-11-20 15:21:28.845789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:28.149 [2024-11-20 15:21:28.845801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:28.149 [2024-11-20 15:21:28.845812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.149 [2024-11-20 15:21:28.845847] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:28.149 [2024-11-20 15:21:28.851779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.149 [2024-11-20 15:21:28.851814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:28.149 [2024-11-20 15:21:28.851829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.954 ms 00:28:28.149 [2024-11-20 15:21:28.851840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.149 [2024-11-20 15:21:28.851876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.149 [2024-11-20 15:21:28.851887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:28.149 [2024-11-20 15:21:28.851899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:28.149 [2024-11-20 15:21:28.851910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.149 [2024-11-20 15:21:28.851959] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:28.149 [2024-11-20 15:21:28.851986] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:28.149 [2024-11-20 15:21:28.852026] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:28.149 [2024-11-20 15:21:28.852047] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:28.149 [2024-11-20 15:21:28.852143] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:28.149 [2024-11-20 15:21:28.852158] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:28.149 
[2024-11-20 15:21:28.852172] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:28.149 [2024-11-20 15:21:28.852187] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:28.149 [2024-11-20 15:21:28.852204] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:28.149 [2024-11-20 15:21:28.852216] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:28.149 [2024-11-20 15:21:28.852227] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:28.149 [2024-11-20 15:21:28.852239] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:28.149 [2024-11-20 15:21:28.852250] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:28.149 [2024-11-20 15:21:28.852263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.149 [2024-11-20 15:21:28.852274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:28.149 [2024-11-20 15:21:28.852285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:28:28.149 [2024-11-20 15:21:28.852296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.149 [2024-11-20 15:21:28.852370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.149 [2024-11-20 15:21:28.852386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:28.149 [2024-11-20 15:21:28.852397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:28.149 [2024-11-20 15:21:28.852408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.149 [2024-11-20 15:21:28.852510] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:28.149 [2024-11-20 15:21:28.852526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:28.149 [2024-11-20 15:21:28.852539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:28.149 [2024-11-20 15:21:28.852550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.149 [2024-11-20 15:21:28.852561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:28.149 [2024-11-20 15:21:28.852570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:28.149 [2024-11-20 15:21:28.852581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:28.149 [2024-11-20 15:21:28.852592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:28.149 [2024-11-20 15:21:28.852602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:28.149 [2024-11-20 15:21:28.852611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:28.149 [2024-11-20 15:21:28.852623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:28.149 [2024-11-20 15:21:28.852646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:28.149 [2024-11-20 15:21:28.852656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:28.149 [2024-11-20 15:21:28.852666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:28.149 [2024-11-20 15:21:28.852676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:28.149 [2024-11-20 15:21:28.852686] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.149 [2024-11-20 15:21:28.852696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:28.149 [2024-11-20 15:21:28.852705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:28.149 [2024-11-20 15:21:28.852714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.149 [2024-11-20 15:21:28.852741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:28.149 [2024-11-20 15:21:28.852751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:28.149 [2024-11-20 15:21:28.852761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:28.149 [2024-11-20 15:21:28.852770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:28.149 [2024-11-20 15:21:28.852780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:28.149 [2024-11-20 15:21:28.852790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:28.149 [2024-11-20 15:21:28.852799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:28.149 [2024-11-20 15:21:28.852808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:28.149 [2024-11-20 15:21:28.852817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:28.149 [2024-11-20 15:21:28.852827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:28.149 [2024-11-20 15:21:28.852836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:28.149 [2024-11-20 15:21:28.852846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:28.149 [2024-11-20 15:21:28.852856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:28.149 [2024-11-20 15:21:28.852865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:28.150 [2024-11-20 15:21:28.852875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:28.150 [2024-11-20 15:21:28.852884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:28.150 [2024-11-20 15:21:28.852894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:28.150 [2024-11-20 15:21:28.852902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:28.150 [2024-11-20 15:21:28.852911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:28.150 [2024-11-20 15:21:28.852920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:28.150 [2024-11-20 15:21:28.852929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.150 [2024-11-20 15:21:28.852938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:28.150 [2024-11-20 15:21:28.852947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:28.150 [2024-11-20 15:21:28.852959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.150 [2024-11-20 15:21:28.852968] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:28.150 [2024-11-20 15:21:28.852979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:28.150 [2024-11-20 15:21:28.852989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:28.150 [2024-11-20 15:21:28.853004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.150 [2024-11-20 
15:21:28.853017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:28.150 [2024-11-20 15:21:28.853027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:28.150 [2024-11-20 15:21:28.853037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:28.150 [2024-11-20 15:21:28.853047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:28.150 [2024-11-20 15:21:28.853056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:28.150 [2024-11-20 15:21:28.853066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:28.150 [2024-11-20 15:21:28.853077] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:28.150 [2024-11-20 15:21:28.853090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:28.150 [2024-11-20 15:21:28.853102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:28.150 [2024-11-20 15:21:28.853112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:28.150 [2024-11-20 15:21:28.853123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:28.150 [2024-11-20 15:21:28.853134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:28.150 [2024-11-20 15:21:28.853144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:28.150 [2024-11-20 15:21:28.853155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:28.150 [2024-11-20 15:21:28.853165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:28.150 [2024-11-20 15:21:28.853176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:28.150 [2024-11-20 15:21:28.853186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:28.150 [2024-11-20 15:21:28.853197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:28.150 [2024-11-20 15:21:28.853207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:28.150 [2024-11-20 15:21:28.853217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:28.150 [2024-11-20 15:21:28.853227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:28.150 [2024-11-20 15:21:28.853238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:28.150 [2024-11-20 15:21:28.853248] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:28:28.150 [2024-11-20 15:21:28.853259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:28.150 [2024-11-20 15:21:28.853270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:28.150 [2024-11-20 15:21:28.853280] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:28.150 [2024-11-20 15:21:28.853290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:28.150 [2024-11-20 15:21:28.853301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:28.150 [2024-11-20 15:21:28.853313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.150 [2024-11-20 15:21:28.853324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:28.150 [2024-11-20 15:21:28.853335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 00:28:28.150 [2024-11-20 15:21:28.853345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.150 [2024-11-20 15:21:28.900120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.150 [2024-11-20 15:21:28.900192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:28.150 [2024-11-20 15:21:28.900211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.790 ms 00:28:28.150 [2024-11-20 15:21:28.900224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.150 [2024-11-20 15:21:28.900344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.150 [2024-11-20 15:21:28.900362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:28.150 [2024-11-20 15:21:28.900374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:28:28.150 [2024-11-20 15:21:28.900386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.150 [2024-11-20 15:21:28.970586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.150 [2024-11-20 15:21:28.970658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:28.150 [2024-11-20 15:21:28.970682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.190 ms 00:28:28.150 [2024-11-20 15:21:28.970694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.150 [2024-11-20 15:21:28.970789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.150 [2024-11-20 15:21:28.970803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:28.150 [2024-11-20 15:21:28.970816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:28.150 [2024-11-20 15:21:28.970826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.150 [2024-11-20 15:21:28.971639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.150 [2024-11-20 15:21:28.971661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:28.150 [2024-11-20 15:21:28.971673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:28:28.150 [2024-11-20 15:21:28.971684] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.150 [2024-11-20 15:21:28.971845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.150 [2024-11-20 15:21:28.971860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:28.150 [2024-11-20 15:21:28.971872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:28:28.150 [2024-11-20 15:21:28.971892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.410 [2024-11-20 15:21:28.996798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.410 [2024-11-20 15:21:28.996862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:28.410 [2024-11-20 15:21:28.996881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.919 ms 00:28:28.410 [2024-11-20 15:21:28.996894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.410 [2024-11-20 15:21:29.018508] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:28.410 [2024-11-20 15:21:29.018746] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:28.410 [2024-11-20 15:21:29.018774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.410 [2024-11-20 15:21:29.018789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:28.410 [2024-11-20 15:21:29.018803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.725 ms 00:28:28.410 [2024-11-20 15:21:29.018814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.410 [2024-11-20 15:21:29.049657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.410 [2024-11-20 15:21:29.049731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:28.410 [2024-11-20 15:21:29.049770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.832 ms 00:28:28.410 [2024-11-20 15:21:29.049782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.410 [2024-11-20 15:21:29.069942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.410 [2024-11-20 15:21:29.070008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:28.410 [2024-11-20 15:21:29.070025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.102 ms 00:28:28.410 [2024-11-20 15:21:29.070037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.410 [2024-11-20 15:21:29.089622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.410 [2024-11-20 15:21:29.089870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:28.410 [2024-11-20 15:21:29.089898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.555 ms 00:28:28.410 [2024-11-20 15:21:29.089910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.410 [2024-11-20 15:21:29.090809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.410 [2024-11-20 15:21:29.090834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:28.410 [2024-11-20 15:21:29.090848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:28:28.410 [2024-11-20 15:21:29.090858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:28.410 [2024-11-20 15:21:29.188517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.410 [2024-11-20 15:21:29.188608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:28.410 [2024-11-20 15:21:29.188629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.785 ms 00:28:28.410 [2024-11-20 15:21:29.188642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.410 [2024-11-20 15:21:29.200637] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:28.410 [2024-11-20 15:21:29.205804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.410 [2024-11-20 15:21:29.205842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:28.410 [2024-11-20 15:21:29.205861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.112 ms 00:28:28.410 [2024-11-20 15:21:29.205873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.410 [2024-11-20 15:21:29.206022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.410 [2024-11-20 15:21:29.206038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:28.410 [2024-11-20 15:21:29.206067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:28.410 [2024-11-20 15:21:29.206079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.410 [2024-11-20 15:21:29.206180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.410 [2024-11-20 15:21:29.206195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:28.410 [2024-11-20 15:21:29.206207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:28.410 [2024-11-20 15:21:29.206218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.410 [2024-11-20 15:21:29.206248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.410 [2024-11-20 15:21:29.206272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:28.410 [2024-11-20 15:21:29.206284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:28.410 [2024-11-20 15:21:29.206295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.410 [2024-11-20 15:21:29.206341] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:28.410 [2024-11-20 15:21:29.206370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.410 [2024-11-20 15:21:29.206382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:28.410 [2024-11-20 15:21:29.206394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:28.410 [2024-11-20 15:21:29.206405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.669 [2024-11-20 15:21:29.246818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.669 [2024-11-20 15:21:29.247004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:28.669 [2024-11-20 15:21:29.247148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.446 ms 00:28:28.669 [2024-11-20 15:21:29.247188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.669 [2024-11-20 15:21:29.247319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.669 [2024-11-20 
15:21:29.247380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:28.669 [2024-11-20 15:21:29.247454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:28.669 [2024-11-20 15:21:29.247486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.669 [2024-11-20 15:21:29.249049] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 441.177 ms, result 0 00:28:29.606  [2024-11-20T15:21:31.380Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-20T15:21:32.316Z] Copying: 50/1024 [MB] (24 MBps) [2024-11-20T15:21:33.694Z] Copying: 78/1024 [MB] (28 MBps) [2024-11-20T15:21:34.261Z] Copying: 104/1024 [MB] (25 MBps) [2024-11-20T15:21:35.637Z] Copying: 129/1024 [MB] (25 MBps) [2024-11-20T15:21:36.575Z] Copying: 155/1024 [MB] (25 MBps) [2024-11-20T15:21:37.511Z] Copying: 181/1024 [MB] (26 MBps) [2024-11-20T15:21:38.448Z] Copying: 208/1024 [MB] (26 MBps) [2024-11-20T15:21:39.390Z] Copying: 233/1024 [MB] (25 MBps) [2024-11-20T15:21:40.361Z] Copying: 259/1024 [MB] (25 MBps) [2024-11-20T15:21:41.299Z] Copying: 285/1024 [MB] (26 MBps) [2024-11-20T15:21:42.677Z] Copying: 311/1024 [MB] (25 MBps) [2024-11-20T15:21:43.246Z] Copying: 337/1024 [MB] (25 MBps) [2024-11-20T15:21:44.624Z] Copying: 362/1024 [MB] (25 MBps) [2024-11-20T15:21:45.561Z] Copying: 387/1024 [MB] (24 MBps) [2024-11-20T15:21:46.497Z] Copying: 413/1024 [MB] (25 MBps) [2024-11-20T15:21:47.436Z] Copying: 439/1024 [MB] (26 MBps) [2024-11-20T15:21:48.374Z] Copying: 465/1024 [MB] (26 MBps) [2024-11-20T15:21:49.312Z] Copying: 490/1024 [MB] (24 MBps) [2024-11-20T15:21:50.248Z] Copying: 515/1024 [MB] (24 MBps) [2024-11-20T15:21:51.625Z] Copying: 540/1024 [MB] (25 MBps) [2024-11-20T15:21:52.564Z] Copying: 565/1024 [MB] (25 MBps) [2024-11-20T15:21:53.502Z] Copying: 590/1024 [MB] (25 MBps) [2024-11-20T15:21:54.438Z] Copying: 615/1024 [MB] (25 MBps) [2024-11-20T15:21:55.374Z] Copying: 640/1024 [MB] (24 MBps) [2024-11-20T15:21:56.312Z] Copying: 666/1024 [MB] (26 MBps) [2024-11-20T15:21:57.250Z] Copying: 692/1024 [MB] (25 MBps) [2024-11-20T15:21:58.626Z] Copying: 718/1024 [MB] (26 MBps) [2024-11-20T15:21:59.564Z] Copying: 744/1024 [MB] (25 MBps) [2024-11-20T15:22:00.501Z] Copying: 770/1024 [MB] (26 MBps) [2024-11-20T15:22:01.437Z] Copying: 795/1024 [MB] (25 MBps) [2024-11-20T15:22:02.466Z] Copying: 822/1024 [MB] (27 MBps) [2024-11-20T15:22:03.402Z] Copying: 851/1024 [MB] (28 MBps) [2024-11-20T15:22:04.338Z] Copying: 877/1024 [MB] (26 MBps) [2024-11-20T15:22:05.277Z] Copying: 903/1024 [MB] (25 MBps) [2024-11-20T15:22:06.215Z] Copying: 929/1024 [MB] (26 MBps) [2024-11-20T15:22:07.594Z] Copying: 956/1024 [MB] (26 MBps) [2024-11-20T15:22:08.532Z] Copying: 985/1024 [MB] (28 MBps) [2024-11-20T15:22:09.468Z] Copying: 1012/1024 [MB] (27 MBps) [2024-11-20T15:22:09.468Z] Copying: 1023/1024 [MB] (11 MBps) [2024-11-20T15:22:09.468Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-20 15:22:09.434361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.632 [2024-11-20 15:22:09.434446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:08.632 [2024-11-20 15:22:09.434469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:08.632 [2024-11-20 15:22:09.434481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.632 [2024-11-20 15:22:09.438365] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel 
destroy on app_thread 00:29:08.632 [2024-11-20 15:22:09.444136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.632 [2024-11-20 15:22:09.444173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:08.632 [2024-11-20 15:22:09.444197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.726 ms 00:29:08.632 [2024-11-20 15:22:09.444209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.632 [2024-11-20 15:22:09.453484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.632 [2024-11-20 15:22:09.453527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:08.632 [2024-11-20 15:22:09.453542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.455 ms 00:29:08.632 [2024-11-20 15:22:09.453553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.891 [2024-11-20 15:22:09.479116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.891 [2024-11-20 15:22:09.479219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:08.891 [2024-11-20 15:22:09.479249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.581 ms 00:29:08.891 [2024-11-20 15:22:09.479261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.891 [2024-11-20 15:22:09.484726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.891 [2024-11-20 15:22:09.484956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:08.891 [2024-11-20 15:22:09.485047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.417 ms 00:29:08.891 [2024-11-20 15:22:09.485084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.891 [2024-11-20 15:22:09.524457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.891 [2024-11-20 15:22:09.524649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:08.891 [2024-11-20 15:22:09.524792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.336 ms 00:29:08.891 [2024-11-20 15:22:09.524811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.891 [2024-11-20 15:22:09.546989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.891 [2024-11-20 15:22:09.547144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:08.891 [2024-11-20 15:22:09.547168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.164 ms 00:29:08.891 [2024-11-20 15:22:09.547180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.891 [2024-11-20 15:22:09.654364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.891 [2024-11-20 15:22:09.654588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:08.891 [2024-11-20 15:22:09.654649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.303 ms 00:29:08.891 [2024-11-20 15:22:09.654661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.891 [2024-11-20 15:22:09.694299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.891 [2024-11-20 15:22:09.694367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:08.891 [2024-11-20 15:22:09.694387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.658 ms 00:29:08.891 [2024-11-20 
15:22:09.694399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.152 [2024-11-20 15:22:09.734505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.152 [2024-11-20 15:22:09.734591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:09.152 [2024-11-20 15:22:09.734613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.112 ms 00:29:09.152 [2024-11-20 15:22:09.734624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.152 [2024-11-20 15:22:09.773187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.152 [2024-11-20 15:22:09.773391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:09.152 [2024-11-20 15:22:09.773418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.540 ms 00:29:09.152 [2024-11-20 15:22:09.773430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.152 [2024-11-20 15:22:09.809526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.152 [2024-11-20 15:22:09.809569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:09.152 [2024-11-20 15:22:09.809591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.970 ms 00:29:09.152 [2024-11-20 15:22:09.809602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.152 [2024-11-20 15:22:09.809644] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:09.152 [2024-11-20 15:22:09.809663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 111104 / 261120 wr_cnt: 1 state: open 00:29:09.152 [2024-11-20 15:22:09.809678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:29:09.152 [2024-11-20 15:22:09.809843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.809999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:09.152 [2024-11-20 15:22:09.810354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810678] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:09.153 [2024-11-20 15:22:09.810868] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:09.153 [2024-11-20 15:22:09.810879] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 449efaf6-bbc5-4a3f-99c9-acfa73fd2d6c 00:29:09.153 [2024-11-20 15:22:09.810891] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 111104 00:29:09.153 [2024-11-20 15:22:09.810909] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 112064 00:29:09.153 [2024-11-20 15:22:09.810932] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 111104 00:29:09.153 [2024-11-20 15:22:09.810943] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0086 00:29:09.153 [2024-11-20 15:22:09.810953] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:09.153 [2024-11-20 15:22:09.810964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:09.153 [2024-11-20 15:22:09.810974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:09.153 [2024-11-20 15:22:09.810984] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:09.153 [2024-11-20 15:22:09.810993] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:09.153 [2024-11-20 15:22:09.811003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.153 [2024-11-20 15:22:09.811014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:09.153 [2024-11-20 15:22:09.811026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:29:09.153 [2024-11-20 15:22:09.811037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.153 [2024-11-20 15:22:09.831471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.153 [2024-11-20 15:22:09.831510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize L2P 00:29:09.153 [2024-11-20 15:22:09.831523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.428 ms 00:29:09.153 [2024-11-20 15:22:09.831534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.153 [2024-11-20 15:22:09.832199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.153 [2024-11-20 15:22:09.832216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:09.153 [2024-11-20 15:22:09.832230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:29:09.153 [2024-11-20 15:22:09.832247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.153 [2024-11-20 15:22:09.887700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.153 [2024-11-20 15:22:09.887757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:09.153 [2024-11-20 15:22:09.887774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.153 [2024-11-20 15:22:09.887786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.153 [2024-11-20 15:22:09.887861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.153 [2024-11-20 15:22:09.887875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:09.153 [2024-11-20 15:22:09.887886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.153 [2024-11-20 15:22:09.887904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.153 [2024-11-20 15:22:09.887983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.153 [2024-11-20 15:22:09.887998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:09.153 [2024-11-20 15:22:09.888009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.153 [2024-11-20 15:22:09.888020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.153 [2024-11-20 15:22:09.888038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.153 [2024-11-20 15:22:09.888050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:09.153 [2024-11-20 15:22:09.888061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.153 [2024-11-20 15:22:09.888072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.412 [2024-11-20 15:22:10.023414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.412 [2024-11-20 15:22:10.023494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:09.412 [2024-11-20 15:22:10.023514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.412 [2024-11-20 15:22:10.023525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.412 [2024-11-20 15:22:10.133177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.413 [2024-11-20 15:22:10.133251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:09.413 [2024-11-20 15:22:10.133269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.413 [2024-11-20 15:22:10.133281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.413 [2024-11-20 15:22:10.133411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.413 
[2024-11-20 15:22:10.133424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:09.413 [2024-11-20 15:22:10.133435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.413 [2024-11-20 15:22:10.133446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.413 [2024-11-20 15:22:10.133505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.413 [2024-11-20 15:22:10.133519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:09.413 [2024-11-20 15:22:10.133530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.413 [2024-11-20 15:22:10.133541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.413 [2024-11-20 15:22:10.133694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.413 [2024-11-20 15:22:10.133708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:09.413 [2024-11-20 15:22:10.133736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.413 [2024-11-20 15:22:10.133748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.413 [2024-11-20 15:22:10.133790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.413 [2024-11-20 15:22:10.133803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:09.413 [2024-11-20 15:22:10.133814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.413 [2024-11-20 15:22:10.133826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.413 [2024-11-20 15:22:10.133874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.413 [2024-11-20 15:22:10.133890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:09.413 [2024-11-20 15:22:10.133901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.413 [2024-11-20 15:22:10.133911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.413 [2024-11-20 15:22:10.133961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.413 [2024-11-20 15:22:10.133973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:09.413 [2024-11-20 15:22:10.133985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.413 [2024-11-20 15:22:10.133994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.413 [2024-11-20 15:22:10.134138] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 702.212 ms, result 0 00:29:11.447 00:29:11.447 00:29:11.447 15:22:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:12.837 15:22:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:12.837 [2024-11-20 15:22:13.595029] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:29:12.837 [2024-11-20 15:22:13.595436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82701 ] 00:29:13.097 [2024-11-20 15:22:13.780750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.097 [2024-11-20 15:22:13.924981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.665 [2024-11-20 15:22:14.356848] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:13.665 [2024-11-20 15:22:14.356932] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:13.925 [2024-11-20 15:22:14.524229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.925 [2024-11-20 15:22:14.524288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:13.925 [2024-11-20 15:22:14.524311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:13.925 [2024-11-20 15:22:14.524322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.925 [2024-11-20 15:22:14.524383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.925 [2024-11-20 15:22:14.524395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:13.925 [2024-11-20 15:22:14.524410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:13.925 [2024-11-20 15:22:14.524421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.925 [2024-11-20 15:22:14.524443] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:13.925 [2024-11-20 15:22:14.525456] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:13.925 [2024-11-20 15:22:14.525479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.925 [2024-11-20 15:22:14.525491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:13.925 [2024-11-20 15:22:14.525502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:29:13.926 [2024-11-20 15:22:14.525513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.926 [2024-11-20 15:22:14.528112] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:13.926 [2024-11-20 15:22:14.548174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.926 [2024-11-20 15:22:14.548216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:13.926 [2024-11-20 15:22:14.548232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.094 ms 00:29:13.926 [2024-11-20 15:22:14.548244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.926 [2024-11-20 15:22:14.548321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.926 [2024-11-20 15:22:14.548335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:13.926 [2024-11-20 15:22:14.548347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:13.926 [2024-11-20 15:22:14.548358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.926 [2024-11-20 15:22:14.560418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:13.926 [2024-11-20 15:22:14.560464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:13.926 [2024-11-20 15:22:14.560481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.000 ms 00:29:13.926 [2024-11-20 15:22:14.560501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.926 [2024-11-20 15:22:14.560604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.926 [2024-11-20 15:22:14.560618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:13.926 [2024-11-20 15:22:14.560630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:29:13.926 [2024-11-20 15:22:14.560642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.926 [2024-11-20 15:22:14.560733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.926 [2024-11-20 15:22:14.560747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:13.926 [2024-11-20 15:22:14.560758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:13.926 [2024-11-20 15:22:14.560768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.926 [2024-11-20 15:22:14.560806] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:13.926 [2024-11-20 15:22:14.566542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.926 [2024-11-20 15:22:14.566580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:13.926 [2024-11-20 15:22:14.566594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.760 ms 00:29:13.926 [2024-11-20 15:22:14.566609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.926 [2024-11-20 15:22:14.566647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.926 [2024-11-20 15:22:14.566660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:13.926 [2024-11-20 15:22:14.566672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:13.926 [2024-11-20 15:22:14.566684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.926 [2024-11-20 15:22:14.566759] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:13.926 [2024-11-20 15:22:14.566789] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:13.926 [2024-11-20 15:22:14.566829] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:13.926 [2024-11-20 15:22:14.566854] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:13.926 [2024-11-20 15:22:14.566950] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:13.926 [2024-11-20 15:22:14.566964] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:13.926 [2024-11-20 15:22:14.566979] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:13.926 [2024-11-20 15:22:14.566993] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:13.926 [2024-11-20 15:22:14.567006] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:13.926 [2024-11-20 15:22:14.567018] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:13.926 [2024-11-20 15:22:14.567029] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:13.926 [2024-11-20 15:22:14.567040] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:13.926 [2024-11-20 15:22:14.567055] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:13.926 [2024-11-20 15:22:14.567067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.926 [2024-11-20 15:22:14.567077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:13.926 [2024-11-20 15:22:14.567088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:29:13.926 [2024-11-20 15:22:14.567099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.926 [2024-11-20 15:22:14.567178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.926 [2024-11-20 15:22:14.567190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:13.926 [2024-11-20 15:22:14.567201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:13.926 [2024-11-20 15:22:14.567212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.926 [2024-11-20 15:22:14.567318] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:13.926 [2024-11-20 15:22:14.567334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:13.926 [2024-11-20 15:22:14.567346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:13.926 [2024-11-20 15:22:14.567357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:13.926 [2024-11-20 15:22:14.567367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:13.926 [2024-11-20 15:22:14.567377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:13.926 [2024-11-20 15:22:14.567387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:13.926 [2024-11-20 15:22:14.567396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:13.926 [2024-11-20 15:22:14.567406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:13.926 [2024-11-20 15:22:14.567415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:13.926 [2024-11-20 15:22:14.567424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:13.926 [2024-11-20 15:22:14.567437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:13.926 [2024-11-20 15:22:14.567447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:13.926 [2024-11-20 15:22:14.567456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:13.926 [2024-11-20 15:22:14.567466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:13.926 [2024-11-20 15:22:14.567489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:13.926 [2024-11-20 15:22:14.567499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:13.926 [2024-11-20 15:22:14.567509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:13.926 [2024-11-20 15:22:14.567519] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:13.926 [2024-11-20 15:22:14.567528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:13.926 [2024-11-20 15:22:14.567538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:13.926 [2024-11-20 15:22:14.567547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:13.926 [2024-11-20 15:22:14.567557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:13.926 [2024-11-20 15:22:14.567567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:13.926 [2024-11-20 15:22:14.567575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:13.926 [2024-11-20 15:22:14.567584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:13.927 [2024-11-20 15:22:14.567593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:13.927 [2024-11-20 15:22:14.567602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:13.927 [2024-11-20 15:22:14.567610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:13.927 [2024-11-20 15:22:14.567619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:13.927 [2024-11-20 15:22:14.567628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:13.927 [2024-11-20 15:22:14.567637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:13.927 [2024-11-20 15:22:14.567647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:13.927 [2024-11-20 15:22:14.567656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:13.927 [2024-11-20 15:22:14.567665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:13.927 [2024-11-20 15:22:14.567674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:13.927 [2024-11-20 15:22:14.567683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:13.927 [2024-11-20 15:22:14.567692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:13.927 [2024-11-20 15:22:14.567701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:13.927 [2024-11-20 15:22:14.567710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:13.927 [2024-11-20 15:22:14.567730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:13.927 [2024-11-20 15:22:14.567740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:13.927 [2024-11-20 15:22:14.567749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:13.927 [2024-11-20 15:22:14.567760] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:13.927 [2024-11-20 15:22:14.567773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:13.927 [2024-11-20 15:22:14.567784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:13.927 [2024-11-20 15:22:14.567794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:13.927 [2024-11-20 15:22:14.567805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:13.927 [2024-11-20 15:22:14.567816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:13.927 [2024-11-20 15:22:14.567826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:13.927 
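The region dumps above give offsets and sizes in MiB, while the superblock metadata dump that follows expresses the same layout as hex block offsets and sizes (blk_offs/blk_sz). The two views agree if one FTL block is 4 KiB (SPDK FTL's usual block size; treated as an assumption here), and the l2p region's 80.00 MiB can also be derived independently from the "L2P entries: 20971520" and "L2P address size: 4" lines above. A minimal cross-check:

```python
# Minimal sketch cross-checking the FTL layout dump above.
# Assumes one FTL block = 4 KiB (SPDK FTL's usual block size).
FTL_BLOCK = 4096
MiB = 1024 * 1024

# l2p region as it appears in the superblock dump below:
# "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000"
print(0x5000 * FTL_BLOCK / MiB)   # 80.0  -> "Region l2p ... blocks: 80.00 MiB"
print(0x20 * FTL_BLOCK / MiB)     # 0.125 -> printed above as "offset: 0.12 MiB"

# Same 80 MiB from the table itself: one 4-byte entry per mapped block.
print(20971520 * 4 / MiB)         # 80.0  -> "L2P entries: 20971520", "address size: 4"
```

In other words, the l2p region is sized to hold exactly one 4-byte mapping entry per L2P entry.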
[2024-11-20 15:22:14.567835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:13.927 [2024-11-20 15:22:14.567844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:13.927 [2024-11-20 15:22:14.567853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:13.927 [2024-11-20 15:22:14.567865] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:13.927 [2024-11-20 15:22:14.567877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:13.927 [2024-11-20 15:22:14.567889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:13.927 [2024-11-20 15:22:14.567900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:13.927 [2024-11-20 15:22:14.567911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:13.927 [2024-11-20 15:22:14.567921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:13.927 [2024-11-20 15:22:14.567932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:13.927 [2024-11-20 15:22:14.567942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:13.927 [2024-11-20 15:22:14.567953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:13.927 [2024-11-20 15:22:14.567963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:13.927 [2024-11-20 15:22:14.567974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:13.927 [2024-11-20 15:22:14.567985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:13.927 [2024-11-20 15:22:14.567995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:13.927 [2024-11-20 15:22:14.568006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:13.927 [2024-11-20 15:22:14.568016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:13.927 [2024-11-20 15:22:14.568026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:13.927 [2024-11-20 15:22:14.568036] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:13.927 [2024-11-20 15:22:14.568052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:13.927 [2024-11-20 15:22:14.568065] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:13.927 [2024-11-20 15:22:14.568076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:13.927 [2024-11-20 15:22:14.568086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:13.927 [2024-11-20 15:22:14.568096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:13.927 [2024-11-20 15:22:14.568116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.927 [2024-11-20 15:22:14.568128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:13.927 [2024-11-20 15:22:14.568139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:29:13.927 [2024-11-20 15:22:14.568149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.927 [2024-11-20 15:22:14.616394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.927 [2024-11-20 15:22:14.616445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:13.927 [2024-11-20 15:22:14.616462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.263 ms 00:29:13.927 [2024-11-20 15:22:14.616474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.927 [2024-11-20 15:22:14.616579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.927 [2024-11-20 15:22:14.616592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:13.927 [2024-11-20 15:22:14.616603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:29:13.927 [2024-11-20 15:22:14.616613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.927 [2024-11-20 15:22:14.682114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.927 [2024-11-20 15:22:14.682173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:13.927 [2024-11-20 15:22:14.682191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.484 ms 00:29:13.927 [2024-11-20 15:22:14.682203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.927 [2024-11-20 15:22:14.682283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.927 [2024-11-20 15:22:14.682296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:13.927 [2024-11-20 15:22:14.682313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:13.927 [2024-11-20 15:22:14.682325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.927 [2024-11-20 15:22:14.683127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.927 [2024-11-20 15:22:14.683144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:13.927 [2024-11-20 15:22:14.683157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:29:13.927 [2024-11-20 15:22:14.683168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.927 [2024-11-20 15:22:14.683312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.927 [2024-11-20 15:22:14.683327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:13.927 [2024-11-20 15:22:14.683338] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:29:13.927 [2024-11-20 15:22:14.683356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.927 [2024-11-20 15:22:14.707766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.927 [2024-11-20 15:22:14.707827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:13.927 [2024-11-20 15:22:14.707852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.424 ms 00:29:13.927 [2024-11-20 15:22:14.707864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.927 [2024-11-20 15:22:14.728824] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:13.927 [2024-11-20 15:22:14.728883] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:13.927 [2024-11-20 15:22:14.728903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.927 [2024-11-20 15:22:14.728916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:13.927 [2024-11-20 15:22:14.728931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.888 ms 00:29:13.927 [2024-11-20 15:22:14.728942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.187 [2024-11-20 15:22:14.760901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.187 [2024-11-20 15:22:14.760968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:14.187 [2024-11-20 15:22:14.760986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.949 ms 00:29:14.187 [2024-11-20 15:22:14.760999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.187 [2024-11-20 15:22:14.780462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.187 [2024-11-20 15:22:14.780529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:14.187 [2024-11-20 15:22:14.780546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.430 ms 00:29:14.187 [2024-11-20 15:22:14.780557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.187 [2024-11-20 15:22:14.799035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.187 [2024-11-20 15:22:14.799082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:14.187 [2024-11-20 15:22:14.799098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.459 ms 00:29:14.187 [2024-11-20 15:22:14.799108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.187 [2024-11-20 15:22:14.799970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.187 [2024-11-20 15:22:14.800002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:14.187 [2024-11-20 15:22:14.800015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:29:14.187 [2024-11-20 15:22:14.800031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.187 [2024-11-20 15:22:14.897882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.187 [2024-11-20 15:22:14.897980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:14.187 [2024-11-20 15:22:14.898008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 97.980 ms 00:29:14.187 [2024-11-20 15:22:14.898020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.187 [2024-11-20 15:22:14.911881] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:14.187 [2024-11-20 15:22:14.916875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.187 [2024-11-20 15:22:14.916913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:14.187 [2024-11-20 15:22:14.916930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.780 ms 00:29:14.187 [2024-11-20 15:22:14.916943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.187 [2024-11-20 15:22:14.917094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.187 [2024-11-20 15:22:14.917110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:14.187 [2024-11-20 15:22:14.917122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:14.187 [2024-11-20 15:22:14.917138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.187 [2024-11-20 15:22:14.919261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.187 [2024-11-20 15:22:14.919302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:14.187 [2024-11-20 15:22:14.919317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.077 ms 00:29:14.187 [2024-11-20 15:22:14.919327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.187 [2024-11-20 15:22:14.919382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.187 [2024-11-20 15:22:14.919393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:14.187 [2024-11-20 15:22:14.919404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:14.187 [2024-11-20 15:22:14.919415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.187 [2024-11-20 15:22:14.919466] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:14.187 [2024-11-20 15:22:14.919480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.187 [2024-11-20 15:22:14.919491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:14.188 [2024-11-20 15:22:14.919502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:14.188 [2024-11-20 15:22:14.919513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.188 [2024-11-20 15:22:14.957056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.188 [2024-11-20 15:22:14.957105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:14.188 [2024-11-20 15:22:14.957121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.582 ms 00:29:14.188 [2024-11-20 15:22:14.957141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.188 [2024-11-20 15:22:14.957234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.188 [2024-11-20 15:22:14.957248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:14.188 [2024-11-20 15:22:14.957259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:29:14.188 [2024-11-20 15:22:14.957270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
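Every management step in the startup sequence above is logged as the same four-record group: an Action marker, the step name, its duration, and a status. Those durations can be pulled out mechanically and summed as a sanity check against the overall figure in the 'FTL startup' finish message just below; the sum comes in slightly under the total, since time between steps is not attributed to any step. A rough parser sketch, assuming one record per line as SPDK emits them (this capture has them re-wrapped):

```python
import re

# Pull (name, duration_ms) pairs out of trace_step records like the ones
# above. Assumes the exact "*NOTICE*: [FTL][ftl0] ..." formatting with one
# record per line, as SPDK emits them.
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.+)$")
DUR_RE  = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms")

def step_durations(log_lines):
    steps, pending = [], None
    for line in log_lines:
        m = NAME_RE.search(line)
        if m:
            pending = m.group(1).strip()
            continue
        m = DUR_RE.search(line)
        if m and pending is not None:
            steps.append((pending, float(m.group(1))))
            pending = None
    return steps

# sum(d for _, d in step_durations(lines)) gives ~427 ms for the steps
# above, slightly under the 434.696 ms the finish message reports.
```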
00:29:14.188 [2024-11-20 15:22:14.958809] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 434.696 ms, result 0 00:29:15.565  [2024-11-20T15:22:17.338Z] Copying: 1212/1048576 [kB] (1212 kBps) [2024-11-20T15:22:18.274Z] Copying: 10236/1048576 [kB] (9024 kBps) [2024-11-20T15:22:19.279Z] Copying: 44/1024 [MB] (34 MBps) [2024-11-20T15:22:20.214Z] Copying: 78/1024 [MB] (33 MBps) [2024-11-20T15:22:21.588Z] Copying: 112/1024 [MB] (34 MBps) [2024-11-20T15:22:22.246Z] Copying: 147/1024 [MB] (34 MBps) [2024-11-20T15:22:23.182Z] Copying: 182/1024 [MB] (35 MBps) [2024-11-20T15:22:24.560Z] Copying: 217/1024 [MB] (34 MBps) [2024-11-20T15:22:25.496Z] Copying: 250/1024 [MB] (33 MBps) [2024-11-20T15:22:26.448Z] Copying: 284/1024 [MB] (33 MBps) [2024-11-20T15:22:27.382Z] Copying: 318/1024 [MB] (34 MBps) [2024-11-20T15:22:28.318Z] Copying: 354/1024 [MB] (36 MBps) [2024-11-20T15:22:29.253Z] Copying: 390/1024 [MB] (35 MBps) [2024-11-20T15:22:30.190Z] Copying: 424/1024 [MB] (34 MBps) [2024-11-20T15:22:31.568Z] Copying: 459/1024 [MB] (34 MBps) [2024-11-20T15:22:32.597Z] Copying: 494/1024 [MB] (35 MBps) [2024-11-20T15:22:33.164Z] Copying: 529/1024 [MB] (35 MBps) [2024-11-20T15:22:34.545Z] Copying: 565/1024 [MB] (35 MBps) [2024-11-20T15:22:35.482Z] Copying: 599/1024 [MB] (34 MBps) [2024-11-20T15:22:36.420Z] Copying: 634/1024 [MB] (34 MBps) [2024-11-20T15:22:37.356Z] Copying: 669/1024 [MB] (34 MBps) [2024-11-20T15:22:38.297Z] Copying: 704/1024 [MB] (34 MBps) [2024-11-20T15:22:39.235Z] Copying: 739/1024 [MB] (35 MBps) [2024-11-20T15:22:40.170Z] Copying: 774/1024 [MB] (35 MBps) [2024-11-20T15:22:41.549Z] Copying: 808/1024 [MB] (33 MBps) [2024-11-20T15:22:42.486Z] Copying: 841/1024 [MB] (32 MBps) [2024-11-20T15:22:43.423Z] Copying: 874/1024 [MB] (33 MBps) [2024-11-20T15:22:44.360Z] Copying: 907/1024 [MB] (32 MBps) [2024-11-20T15:22:45.297Z] Copying: 941/1024 [MB] (33 MBps) [2024-11-20T15:22:46.235Z] Copying: 973/1024 [MB] (32 MBps) [2024-11-20T15:22:46.803Z] Copying: 1007/1024 [MB] (33 MBps) [2024-11-20T15:22:47.374Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-11-20 15:22:47.093679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.538 [2024-11-20 15:22:47.093819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:46.538 [2024-11-20 15:22:47.093839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:46.538 [2024-11-20 15:22:47.093851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.538 [2024-11-20 15:22:47.093883] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:46.538 [2024-11-20 15:22:47.099481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.538 [2024-11-20 15:22:47.099525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:46.538 [2024-11-20 15:22:47.099540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.584 ms 00:29:46.538 [2024-11-20 15:22:47.099552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.538 [2024-11-20 15:22:47.099840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.538 [2024-11-20 15:22:47.099862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:46.538 [2024-11-20 15:22:47.099881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:29:46.538 [2024-11-20 15:22:47.099892] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.538 [2024-11-20 15:22:47.109907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.538 [2024-11-20 15:22:47.109957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:46.538 [2024-11-20 15:22:47.109974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.012 ms 00:29:46.538 [2024-11-20 15:22:47.109986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.538 [2024-11-20 15:22:47.115414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.538 [2024-11-20 15:22:47.115476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:46.538 [2024-11-20 15:22:47.115500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.402 ms 00:29:46.538 [2024-11-20 15:22:47.115510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.538 [2024-11-20 15:22:47.157143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.538 [2024-11-20 15:22:47.157198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:46.538 [2024-11-20 15:22:47.157217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.621 ms 00:29:46.538 [2024-11-20 15:22:47.157228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.538 [2024-11-20 15:22:47.182101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.538 [2024-11-20 15:22:47.182174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:46.538 [2024-11-20 15:22:47.182194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.851 ms 00:29:46.538 [2024-11-20 15:22:47.182206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.538 [2024-11-20 15:22:47.184048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.538 [2024-11-20 15:22:47.184089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:46.538 [2024-11-20 15:22:47.184103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.697 ms 00:29:46.538 [2024-11-20 15:22:47.184115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.538 [2024-11-20 15:22:47.224058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.538 [2024-11-20 15:22:47.224130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:46.538 [2024-11-20 15:22:47.224150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.972 ms 00:29:46.538 [2024-11-20 15:22:47.224162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.538 [2024-11-20 15:22:47.261297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.538 [2024-11-20 15:22:47.261358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:46.538 [2024-11-20 15:22:47.261393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.128 ms 00:29:46.538 [2024-11-20 15:22:47.261404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.538 [2024-11-20 15:22:47.299344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.538 [2024-11-20 15:22:47.299402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:46.538 [2024-11-20 15:22:47.299419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 37.928 ms 00:29:46.538 [2024-11-20 15:22:47.299431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.538 [2024-11-20 15:22:47.337320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.538 [2024-11-20 15:22:47.337381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:46.538 [2024-11-20 15:22:47.337400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.806 ms 00:29:46.538 [2024-11-20 15:22:47.337411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.538 [2024-11-20 15:22:47.337503] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:46.538 [2024-11-20 15:22:47.337526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:46.538 [2024-11-20 15:22:47.337541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:46.538 [2024-11-20 15:22:47.337553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: 
free 00:29:46.538 [2024-11-20 15:22:47.337770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:46.538 [2024-11-20 15:22:47.337792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.337996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 
261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338598] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:46.539 [2024-11-20 15:22:47.338671] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:46.539 [2024-11-20 15:22:47.338682] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 449efaf6-bbc5-4a3f-99c9-acfa73fd2d6c 00:29:46.539 [2024-11-20 15:22:47.338693] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:46.539 [2024-11-20 15:22:47.338704] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 153536 00:29:46.539 [2024-11-20 15:22:47.338714] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 151552 00:29:46.539 [2024-11-20 15:22:47.338738] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0131 00:29:46.539 [2024-11-20 15:22:47.338748] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:46.539 [2024-11-20 15:22:47.338759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:46.539 [2024-11-20 15:22:47.338769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:46.539 [2024-11-20 15:22:47.338791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:46.539 [2024-11-20 15:22:47.338800] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:46.539 [2024-11-20 15:22:47.338810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.539 [2024-11-20 15:22:47.338822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:46.539 [2024-11-20 15:22:47.338833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.312 ms 00:29:46.539 [2024-11-20 15:22:47.338844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.540 [2024-11-20 15:22:47.359488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.540 [2024-11-20 15:22:47.359553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:46.540 [2024-11-20 15:22:47.359571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.625 ms 00:29:46.540 [2024-11-20 15:22:47.359582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.540 [2024-11-20 15:22:47.360227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.540 [2024-11-20 15:22:47.360244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:46.540 [2024-11-20 15:22:47.360256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:29:46.540 [2024-11-20 15:22:47.360267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.799 [2024-11-20 15:22:47.415072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
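The statistics dump above is internally consistent, and its headline numbers can be reproduced directly from it: the total valid LBA count is the sum of the per-band valid counts (only Band 1 and Band 2 are non-empty), and the write amplification factor is total writes divided by user writes. A quick check:

```python
# Reproduce the ftl_dev_dump_stats headline numbers from the dump above.
band1_valid = 261120              # "Band 1: 261120 / 261120 ... state: closed"
band2_valid = 1536                # "Band 2: 1536 / 261120 ... state: open"
print(band1_valid + band2_valid)  # 262656 -> "total valid LBAs: 262656"

total_writes, user_writes = 153536, 151552
print(round(total_writes / user_writes, 4))  # 1.0131 -> "WAF: 1.0131"
```

The roughly 1.3% of writes beyond user data is presumably the housekeeping traffic visible in the trace above (superblock, band, P2L, and NV-cache metadata persisted during startup and shutdown).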
00:29:46.799 [2024-11-20 15:22:47.415135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:46.799 [2024-11-20 15:22:47.415152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.799 [2024-11-20 15:22:47.415164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.799 [2024-11-20 15:22:47.415258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.799 [2024-11-20 15:22:47.415271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:46.799 [2024-11-20 15:22:47.415294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.799 [2024-11-20 15:22:47.415305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.799 [2024-11-20 15:22:47.415409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.799 [2024-11-20 15:22:47.415423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:46.799 [2024-11-20 15:22:47.415434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.799 [2024-11-20 15:22:47.415445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.799 [2024-11-20 15:22:47.415464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.799 [2024-11-20 15:22:47.415474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:46.799 [2024-11-20 15:22:47.415485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.799 [2024-11-20 15:22:47.415496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.799 [2024-11-20 15:22:47.551832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.799 [2024-11-20 15:22:47.551921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:46.799 [2024-11-20 15:22:47.551940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.799 [2024-11-20 15:22:47.551951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.059 [2024-11-20 15:22:47.657910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.059 [2024-11-20 15:22:47.657999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:47.059 [2024-11-20 15:22:47.658018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.059 [2024-11-20 15:22:47.658030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.059 [2024-11-20 15:22:47.658174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.059 [2024-11-20 15:22:47.658193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:47.059 [2024-11-20 15:22:47.658205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.059 [2024-11-20 15:22:47.658216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.059 [2024-11-20 15:22:47.658267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.059 [2024-11-20 15:22:47.658279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:47.059 [2024-11-20 15:22:47.658290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.059 [2024-11-20 15:22:47.658300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.059 [2024-11-20 
15:22:47.658424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.059 [2024-11-20 15:22:47.658438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:47.059 [2024-11-20 15:22:47.658454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.059 [2024-11-20 15:22:47.658465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.059 [2024-11-20 15:22:47.658503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.059 [2024-11-20 15:22:47.658516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:47.059 [2024-11-20 15:22:47.658528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.059 [2024-11-20 15:22:47.658538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.059 [2024-11-20 15:22:47.658587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.059 [2024-11-20 15:22:47.658599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:47.059 [2024-11-20 15:22:47.658610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.059 [2024-11-20 15:22:47.658624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.059 [2024-11-20 15:22:47.658676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.059 [2024-11-20 15:22:47.658688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:47.059 [2024-11-20 15:22:47.658700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.059 [2024-11-20 15:22:47.658710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.059 [2024-11-20 15:22:47.658880] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 566.075 ms, result 0 00:29:47.997 00:29:47.997 00:29:48.256 15:22:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:50.162 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:50.162 15:22:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:50.162 [2024-11-20 15:22:50.709979] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
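The md5sum -c pass above shows the first copy survived the dirty shutdown and restore intact, and the spdk_dd invocation that follows reads the remaining data back for the same check. With a 4 KiB FTL block (the same assumption as earlier), --count=262144 and --skip=262144 each correspond to exactly 1024 MiB, matching the "1024/1024 [MB]" progress of the preceding copy, which moved that much data in roughly 32 s for the reported 32 MBps average. A sketch of the accounting, plus what one md5sum -c entry amounts to:

```python
import hashlib

# spdk_dd I/O-unit accounting, assuming 4 KiB FTL blocks.
BLOCK = 4096
count = skip = 262144
print(count * BLOCK // (1024 * 1024))  # 1024 -> read 1024 MiB, starting 1024 MiB in

# What one "md5sum -c" entry boils down to: hash the file, compare digests.
# (path and recorded_md5 are placeholders, not values from this run.)
def md5_ok(path, recorded_md5):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == recorded_md5
```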
00:29:50.162 [2024-11-20 15:22:50.710120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83066 ] 00:29:50.162 [2024-11-20 15:22:50.898276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.422 [2024-11-20 15:22:51.043624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.681 [2024-11-20 15:22:51.479009] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:50.681 [2024-11-20 15:22:51.479099] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:50.941 [2024-11-20 15:22:51.644991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.941 [2024-11-20 15:22:51.645062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:50.941 [2024-11-20 15:22:51.645087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:50.941 [2024-11-20 15:22:51.645098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.941 [2024-11-20 15:22:51.645156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.941 [2024-11-20 15:22:51.645170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:50.941 [2024-11-20 15:22:51.645185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:50.941 [2024-11-20 15:22:51.645196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.941 [2024-11-20 15:22:51.645219] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:50.941 [2024-11-20 15:22:51.646241] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:50.941 [2024-11-20 15:22:51.646274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.941 [2024-11-20 15:22:51.646286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:50.941 [2024-11-20 15:22:51.646299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:29:50.941 [2024-11-20 15:22:51.646310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.941 [2024-11-20 15:22:51.648652] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:50.941 [2024-11-20 15:22:51.668227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.941 [2024-11-20 15:22:51.668273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:50.941 [2024-11-20 15:22:51.668291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.606 ms 00:29:50.941 [2024-11-20 15:22:51.668302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.941 [2024-11-20 15:22:51.668384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.941 [2024-11-20 15:22:51.668398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:50.941 [2024-11-20 15:22:51.668410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:50.941 [2024-11-20 15:22:51.668421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.941 [2024-11-20 15:22:51.681140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:50.941 [2024-11-20 15:22:51.681190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:50.941 [2024-11-20 15:22:51.681206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.654 ms 00:29:50.941 [2024-11-20 15:22:51.681228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.941 [2024-11-20 15:22:51.681338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.941 [2024-11-20 15:22:51.681354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:50.941 [2024-11-20 15:22:51.681366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:29:50.941 [2024-11-20 15:22:51.681377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.941 [2024-11-20 15:22:51.681457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.941 [2024-11-20 15:22:51.681470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:50.941 [2024-11-20 15:22:51.681481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:50.941 [2024-11-20 15:22:51.681492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.941 [2024-11-20 15:22:51.681529] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:50.941 [2024-11-20 15:22:51.687446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.941 [2024-11-20 15:22:51.687503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:50.941 [2024-11-20 15:22:51.687517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.942 ms 00:29:50.941 [2024-11-20 15:22:51.687535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.941 [2024-11-20 15:22:51.687572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.941 [2024-11-20 15:22:51.687584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:50.941 [2024-11-20 15:22:51.687596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:50.941 [2024-11-20 15:22:51.687608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.941 [2024-11-20 15:22:51.687652] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:50.941 [2024-11-20 15:22:51.687680] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:50.941 [2024-11-20 15:22:51.687731] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:50.941 [2024-11-20 15:22:51.687756] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:50.941 [2024-11-20 15:22:51.687855] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:50.941 [2024-11-20 15:22:51.687869] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:50.941 [2024-11-20 15:22:51.687882] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:50.941 [2024-11-20 15:22:51.687895] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:50.941 [2024-11-20 15:22:51.687908] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:50.941 [2024-11-20 15:22:51.687919] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:50.941 [2024-11-20 15:22:51.687930] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:50.941 [2024-11-20 15:22:51.687942] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:50.941 [2024-11-20 15:22:51.687957] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:50.941 [2024-11-20 15:22:51.687969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.941 [2024-11-20 15:22:51.687980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:50.941 [2024-11-20 15:22:51.687991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:29:50.941 [2024-11-20 15:22:51.688001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.941 [2024-11-20 15:22:51.688077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.941 [2024-11-20 15:22:51.688089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:50.941 [2024-11-20 15:22:51.688100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:50.941 [2024-11-20 15:22:51.688110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.941 [2024-11-20 15:22:51.688217] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:50.941 [2024-11-20 15:22:51.688233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:50.941 [2024-11-20 15:22:51.688244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:50.941 [2024-11-20 15:22:51.688255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.941 [2024-11-20 15:22:51.688267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:50.942 [2024-11-20 15:22:51.688277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:50.942 [2024-11-20 15:22:51.688287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:50.942 [2024-11-20 15:22:51.688297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:50.942 [2024-11-20 15:22:51.688307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:50.942 [2024-11-20 15:22:51.688316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:50.942 [2024-11-20 15:22:51.688325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:50.942 [2024-11-20 15:22:51.688336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:50.942 [2024-11-20 15:22:51.688345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:50.942 [2024-11-20 15:22:51.688355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:50.942 [2024-11-20 15:22:51.688364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:50.942 [2024-11-20 15:22:51.688385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.942 [2024-11-20 15:22:51.688394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:50.942 [2024-11-20 15:22:51.688404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:50.942 [2024-11-20 15:22:51.688413] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.942 [2024-11-20 15:22:51.688423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:50.942 [2024-11-20 15:22:51.688433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:50.942 [2024-11-20 15:22:51.688443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:50.942 [2024-11-20 15:22:51.688452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:50.942 [2024-11-20 15:22:51.688462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:50.942 [2024-11-20 15:22:51.688471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:50.942 [2024-11-20 15:22:51.688480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:50.942 [2024-11-20 15:22:51.688489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:50.942 [2024-11-20 15:22:51.688498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:50.942 [2024-11-20 15:22:51.688507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:50.942 [2024-11-20 15:22:51.688517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:50.942 [2024-11-20 15:22:51.688525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:50.942 [2024-11-20 15:22:51.688535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:50.942 [2024-11-20 15:22:51.688544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:50.942 [2024-11-20 15:22:51.688553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:50.942 [2024-11-20 15:22:51.688562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:50.942 [2024-11-20 15:22:51.688571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:50.942 [2024-11-20 15:22:51.688580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:50.942 [2024-11-20 15:22:51.688589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:50.942 [2024-11-20 15:22:51.688598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:50.942 [2024-11-20 15:22:51.688607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.942 [2024-11-20 15:22:51.688616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:50.942 [2024-11-20 15:22:51.688625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:50.942 [2024-11-20 15:22:51.688634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.942 [2024-11-20 15:22:51.688646] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:50.942 [2024-11-20 15:22:51.688656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:50.942 [2024-11-20 15:22:51.688666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:50.942 [2024-11-20 15:22:51.688676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.942 [2024-11-20 15:22:51.688686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:50.942 [2024-11-20 15:22:51.688696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:50.942 [2024-11-20 15:22:51.688706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:50.942 
[2024-11-20 15:22:51.688715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:50.942 [2024-11-20 15:22:51.688734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:50.942 [2024-11-20 15:22:51.688744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:50.942 [2024-11-20 15:22:51.688755] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:50.942 [2024-11-20 15:22:51.688769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:50.942 [2024-11-20 15:22:51.688781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:50.942 [2024-11-20 15:22:51.688792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:50.942 [2024-11-20 15:22:51.688803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:50.942 [2024-11-20 15:22:51.688814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:50.942 [2024-11-20 15:22:51.688825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:50.942 [2024-11-20 15:22:51.688837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:50.942 [2024-11-20 15:22:51.688847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:50.942 [2024-11-20 15:22:51.688858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:50.942 [2024-11-20 15:22:51.688868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:50.942 [2024-11-20 15:22:51.688879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:50.942 [2024-11-20 15:22:51.688889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:50.942 [2024-11-20 15:22:51.688900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:50.942 [2024-11-20 15:22:51.688910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:50.942 [2024-11-20 15:22:51.688921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:50.942 [2024-11-20 15:22:51.688931] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:50.942 [2024-11-20 15:22:51.688947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:50.942 [2024-11-20 15:22:51.688959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:50.942 [2024-11-20 15:22:51.688969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:50.942 [2024-11-20 15:22:51.688978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:50.942 [2024-11-20 15:22:51.688989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:50.942 [2024-11-20 15:22:51.689000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.942 [2024-11-20 15:22:51.689012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:50.942 [2024-11-20 15:22:51.689022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:29:50.942 [2024-11-20 15:22:51.689032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.942 [2024-11-20 15:22:51.739756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.942 [2024-11-20 15:22:51.739824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:50.942 [2024-11-20 15:22:51.739843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.746 ms 00:29:50.942 [2024-11-20 15:22:51.739855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.942 [2024-11-20 15:22:51.739981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.942 [2024-11-20 15:22:51.739993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:50.942 [2024-11-20 15:22:51.740004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:29:50.942 [2024-11-20 15:22:51.740015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.201 [2024-11-20 15:22:51.805631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.201 [2024-11-20 15:22:51.805691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:51.201 [2024-11-20 15:22:51.805709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.596 ms 00:29:51.201 [2024-11-20 15:22:51.805728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.201 [2024-11-20 15:22:51.805808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.201 [2024-11-20 15:22:51.805820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:51.201 [2024-11-20 15:22:51.805837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:51.201 [2024-11-20 15:22:51.805849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.201 [2024-11-20 15:22:51.806640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.201 [2024-11-20 15:22:51.806716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:51.201 [2024-11-20 15:22:51.806748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:29:51.201 [2024-11-20 15:22:51.806759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.201 [2024-11-20 15:22:51.806906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.201 [2024-11-20 15:22:51.806921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:51.201 [2024-11-20 15:22:51.806932] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:29:51.201 [2024-11-20 15:22:51.806951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.201 [2024-11-20 15:22:51.829153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.201 [2024-11-20 15:22:51.829203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:51.201 [2024-11-20 15:22:51.829225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.213 ms 00:29:51.201 [2024-11-20 15:22:51.829236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.201 [2024-11-20 15:22:51.850126] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:51.201 [2024-11-20 15:22:51.850171] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:51.201 [2024-11-20 15:22:51.850189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.201 [2024-11-20 15:22:51.850202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:51.201 [2024-11-20 15:22:51.850215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.808 ms 00:29:51.201 [2024-11-20 15:22:51.850226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.201 [2024-11-20 15:22:51.881341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.201 [2024-11-20 15:22:51.881542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:51.201 [2024-11-20 15:22:51.881578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.112 ms 00:29:51.201 [2024-11-20 15:22:51.881591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.201 [2024-11-20 15:22:51.901139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.201 [2024-11-20 15:22:51.901206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:51.201 [2024-11-20 15:22:51.901225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.415 ms 00:29:51.201 [2024-11-20 15:22:51.901235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.201 [2024-11-20 15:22:51.919371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.201 [2024-11-20 15:22:51.919414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:51.201 [2024-11-20 15:22:51.919431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.115 ms 00:29:51.201 [2024-11-20 15:22:51.919442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.201 [2024-11-20 15:22:51.920298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.201 [2024-11-20 15:22:51.920332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:51.201 [2024-11-20 15:22:51.920346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:29:51.201 [2024-11-20 15:22:51.920362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.201 [2024-11-20 15:22:52.017717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.201 [2024-11-20 15:22:52.018001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:51.201 [2024-11-20 15:22:52.018040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 97.485 ms 00:29:51.201 [2024-11-20 15:22:52.018052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.201 [2024-11-20 15:22:52.029614] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:51.460 [2024-11-20 15:22:52.034197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.460 [2024-11-20 15:22:52.034231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:51.460 [2024-11-20 15:22:52.034246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.043 ms 00:29:51.460 [2024-11-20 15:22:52.034258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.460 [2024-11-20 15:22:52.034390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.460 [2024-11-20 15:22:52.034404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:51.460 [2024-11-20 15:22:52.034417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:51.460 [2024-11-20 15:22:52.034432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.460 [2024-11-20 15:22:52.035841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.460 [2024-11-20 15:22:52.035978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:51.460 [2024-11-20 15:22:52.036001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.366 ms 00:29:51.460 [2024-11-20 15:22:52.036012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.460 [2024-11-20 15:22:52.036063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.460 [2024-11-20 15:22:52.036077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:51.460 [2024-11-20 15:22:52.036089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:51.460 [2024-11-20 15:22:52.036099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.460 [2024-11-20 15:22:52.036149] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:51.460 [2024-11-20 15:22:52.036163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.460 [2024-11-20 15:22:52.036174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:51.460 [2024-11-20 15:22:52.036186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:51.460 [2024-11-20 15:22:52.036197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.460 [2024-11-20 15:22:52.074456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.460 [2024-11-20 15:22:52.074505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:51.460 [2024-11-20 15:22:52.074522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.299 ms 00:29:51.460 [2024-11-20 15:22:52.074540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.460 [2024-11-20 15:22:52.074627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.460 [2024-11-20 15:22:52.074641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:51.460 [2024-11-20 15:22:52.074653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:51.460 [2024-11-20 15:22:52.074663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
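(The superblock region table above encodes sizes in 4 KiB FTL blocks, so the hex blk_sz values can be cross-checked against the MiB figures in the layout dump. A quick sketch, assuming the default 4 KiB block size; both checks reproduce the 80.00 MiB reported for the l2p region:

    printf '%s MiB\n' $(( 0x5000 * 4096 / 1024 / 1024 ))   # region type:0x2 blk_sz -> 80 MiB
    printf '%s MiB\n' $(( 20971520 * 4 / 1024 / 1024 ))    # L2P entries x 4 B address size -> 80 MiB
)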
00:29:51.460 [2024-11-20 15:22:52.076164] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 431.317 ms, result 0 00:29:52.837  [2024-11-20T15:22:54.618Z] Copying: 28/1024 [MB] (28 MBps) [...] [2024-11-20T15:23:28.840Z] Copying: 1024/1024 [MB] (average 28 MBps) [2024-11-20 15:23:28.747535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.004 [2024-11-20 15:23:28.747667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:28.004 [2024-11-20 15:23:28.747702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:28.004 [2024-11-20 15:23:28.747751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.004 [2024-11-20 15:23:28.747804] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:28.004 [2024-11-20 15:23:28.757522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.004 [2024-11-20 15:23:28.757610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:28.004 [2024-11-20 15:23:28.757662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.690 ms 00:30:28.004 [2024-11-20 15:23:28.757698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.004 [2024-11-20 15:23:28.758217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:30:28.004 [2024-11-20 15:23:28.758520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:28.004 [2024-11-20 15:23:28.758586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:30:28.004 [2024-11-20 15:23:28.758626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.004 [2024-11-20 15:23:28.762439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.004 [2024-11-20 15:23:28.762481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:28.004 [2024-11-20 15:23:28.762501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.743 ms 00:30:28.004 [2024-11-20 15:23:28.762519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.004 [2024-11-20 15:23:28.769385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.004 [2024-11-20 15:23:28.769554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:28.004 [2024-11-20 15:23:28.769612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.829 ms 00:30:28.004 [2024-11-20 15:23:28.769626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.004 [2024-11-20 15:23:28.810829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.005 [2024-11-20 15:23:28.810886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:28.005 [2024-11-20 15:23:28.810904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.132 ms 00:30:28.005 [2024-11-20 15:23:28.810915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.005 [2024-11-20 15:23:28.833384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.005 [2024-11-20 15:23:28.833444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:28.005 [2024-11-20 15:23:28.833463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.458 ms 00:30:28.005 [2024-11-20 15:23:28.833476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.005 [2024-11-20 15:23:28.835330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.005 [2024-11-20 15:23:28.835520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:28.005 [2024-11-20 15:23:28.835557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.803 ms 00:30:28.005 [2024-11-20 15:23:28.835580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.266 [2024-11-20 15:23:28.875823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.266 [2024-11-20 15:23:28.875894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:28.266 [2024-11-20 15:23:28.875913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.265 ms 00:30:28.266 [2024-11-20 15:23:28.875926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.266 [2024-11-20 15:23:28.915392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.266 [2024-11-20 15:23:28.915475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:28.266 [2024-11-20 15:23:28.915495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.476 ms 00:30:28.266 [2024-11-20 15:23:28.915523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.266 [2024-11-20 
15:23:28.954432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.266 [2024-11-20 15:23:28.954490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:28.266 [2024-11-20 15:23:28.954508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.920 ms 00:30:28.266 [2024-11-20 15:23:28.954521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.266 [2024-11-20 15:23:28.993945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.266 [2024-11-20 15:23:28.994005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:28.266 [2024-11-20 15:23:28.994024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.372 ms 00:30:28.266 [2024-11-20 15:23:28.994035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.266 [2024-11-20 15:23:28.994079] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:28.266 [2024-11-20 15:23:28.994110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:28.266 [2024-11-20 15:23:28.994131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:28.266 [2024-11-20 15:23:28.994144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:28.266 [2024-11-20 15:23:28.994157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:28.266 [2024-11-20 15:23:28.994170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:28.266 [2024-11-20 15:23:28.994183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:28.266 [2024-11-20 15:23:28.994195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:28.266 [2024-11-20 15:23:28.994208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:28.266 [2024-11-20 15:23:28.994220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:28.266 [2024-11-20 15:23:28.994232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:28.266 [2024-11-20 15:23:28.994243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:28.266 [2024-11-20 15:23:28.994255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:28.266 [2024-11-20 15:23:28.994266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:28.266 [2024-11-20 15:23:28.994277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:28.266 [2024-11-20 15:23:28.994289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:28.266 [2024-11-20 15:23:28.994300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:28.266 [2024-11-20 15:23:28.994312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:28.266 [2024-11-20 15:23:28.994323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 
0 state: free 00:30:28.266 [2024-11-20 15:23:28.994334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
43: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994942] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.994997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.995009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.995020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.995032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.995044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.995055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.995067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.995080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.995092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.995103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.995115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.995127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.995139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.995151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.995163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.995175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:28.267 [2024-11-20 15:23:28.995187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:28.268 [2024-11-20 15:23:28.995197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:28.268 [2024-11-20 15:23:28.995209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:28.268 [2024-11-20 15:23:28.995220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:28.268 [2024-11-20 15:23:28.995231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:28.268 [2024-11-20 15:23:28.995242] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:28.268 [2024-11-20 15:23:28.995253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:28.268 [2024-11-20 15:23:28.995264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:28.268 [2024-11-20 15:23:28.995293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:28.268 [2024-11-20 15:23:28.995305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:28.268 [2024-11-20 15:23:28.995317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:28.268 [2024-11-20 15:23:28.995330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:28.268 [2024-11-20 15:23:28.995351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:28.268 [2024-11-20 15:23:28.995376] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:28.268 [2024-11-20 15:23:28.995393] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 449efaf6-bbc5-4a3f-99c9-acfa73fd2d6c 00:30:28.268 [2024-11-20 15:23:28.995407] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:28.268 [2024-11-20 15:23:28.995427] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:28.268 [2024-11-20 15:23:28.995444] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:28.268 [2024-11-20 15:23:28.995457] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:28.268 [2024-11-20 15:23:28.995467] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:28.268 [2024-11-20 15:23:28.995481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:28.268 [2024-11-20 15:23:28.995517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:28.268 [2024-11-20 15:23:28.995533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:28.268 [2024-11-20 15:23:28.995553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:28.268 [2024-11-20 15:23:28.995575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.268 [2024-11-20 15:23:28.995596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:28.268 [2024-11-20 15:23:28.995616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.498 ms 00:30:28.268 [2024-11-20 15:23:28.995634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.268 [2024-11-20 15:23:29.018321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.268 [2024-11-20 15:23:29.018372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:28.268 [2024-11-20 15:23:29.018389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.649 ms 00:30:28.268 [2024-11-20 15:23:29.018401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.268 [2024-11-20 15:23:29.019087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.268 [2024-11-20 15:23:29.019130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:28.268 [2024-11-20 15:23:29.019163] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:30:28.268 [2024-11-20 15:23:29.019185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.268 [2024-11-20 15:23:29.075997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.268 [2024-11-20 15:23:29.076256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:28.268 [2024-11-20 15:23:29.076293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.268 [2024-11-20 15:23:29.076308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.268 [2024-11-20 15:23:29.076438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.268 [2024-11-20 15:23:29.076460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:28.268 [2024-11-20 15:23:29.076491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.268 [2024-11-20 15:23:29.076506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.268 [2024-11-20 15:23:29.076603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.268 [2024-11-20 15:23:29.076621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:28.268 [2024-11-20 15:23:29.076638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.268 [2024-11-20 15:23:29.076653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.268 [2024-11-20 15:23:29.076678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.268 [2024-11-20 15:23:29.076693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:28.268 [2024-11-20 15:23:29.076708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.268 [2024-11-20 15:23:29.076745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.528 [2024-11-20 15:23:29.221617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.528 [2024-11-20 15:23:29.221710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:28.528 [2024-11-20 15:23:29.221744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.528 [2024-11-20 15:23:29.221757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.528 [2024-11-20 15:23:29.336683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.528 [2024-11-20 15:23:29.336788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:28.528 [2024-11-20 15:23:29.336816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.528 [2024-11-20 15:23:29.336829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.528 [2024-11-20 15:23:29.336957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.528 [2024-11-20 15:23:29.336971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:28.528 [2024-11-20 15:23:29.336984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.528 [2024-11-20 15:23:29.336996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.528 [2024-11-20 15:23:29.337054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.528 [2024-11-20 15:23:29.337068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands 00:30:28.528 [2024-11-20 15:23:29.337080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.528 [2024-11-20 15:23:29.337092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.528 [2024-11-20 15:23:29.337269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.528 [2024-11-20 15:23:29.337288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:28.528 [2024-11-20 15:23:29.337305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.528 [2024-11-20 15:23:29.337324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.528 [2024-11-20 15:23:29.337392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.528 [2024-11-20 15:23:29.337417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:28.528 [2024-11-20 15:23:29.337437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.528 [2024-11-20 15:23:29.337449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.528 [2024-11-20 15:23:29.337529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.528 [2024-11-20 15:23:29.337571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:28.528 [2024-11-20 15:23:29.337594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.529 [2024-11-20 15:23:29.337612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.529 [2024-11-20 15:23:29.337700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.529 [2024-11-20 15:23:29.337740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:28.529 [2024-11-20 15:23:29.337763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.529 [2024-11-20 15:23:29.337783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.529 [2024-11-20 15:23:29.338014] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 591.390 ms, result 0 00:30:29.905 00:30:29.905 00:30:29.905 15:23:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:31.808 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:30:31.808 15:23:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:30:31.808 15:23:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:30:31.808 15:23:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:31.808 15:23:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:32.068 15:23:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:32.068 15:23:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:32.068 15:23:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:32.068 15:23:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81303 00:30:32.068 15:23:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81303 ']' 00:30:32.068 15:23:32 
ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81303 00:30:32.068 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81303) - No such process 00:30:32.068 Process with pid 81303 is not found 00:30:32.068 15:23:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81303 is not found' 00:30:32.068 15:23:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:30:32.636 Remove shared memory files 00:30:32.636 15:23:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:30:32.636 15:23:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:32.636 15:23:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:32.636 15:23:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:32.636 15:23:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:30:32.636 15:23:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:32.636 15:23:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:32.636 ************************************ 00:30:32.636 END TEST ftl_dirty_shutdown 00:30:32.636 ************************************ 00:30:32.636 00:30:32.636 real 3m34.818s 00:30:32.636 user 4m1.888s 00:30:32.636 sys 0m40.144s 00:30:32.636 15:23:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:32.636 15:23:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:32.636 15:23:33 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:32.636 15:23:33 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:32.636 15:23:33 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:32.636 15:23:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:32.636 ************************************ 00:30:32.636 START TEST ftl_upgrade_shutdown 00:30:32.636 ************************************ 00:30:32.636 15:23:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:32.636 * Looking for test storage... 
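(The "No such process" exchange above comes from the harness probing a pid that has already exited: kill -0 only tests for the process's existence. A hypothetical, simplified form of the killprocess helper behind it:

    killprocess() {
      local pid=$1
      if kill -0 "$pid" 2>/dev/null; then            # probe: is the process alive?
        kill "$pid" && wait "$pid"                   # yes: terminate and reap it
      else
        echo "Process with pid $pid is not found"    # no: just report, as logged above
      fi
    }
)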
00:30:32.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:32.636 15:23:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:32.636 15:23:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:30:32.636 15:23:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:32.894 15:23:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:32.894 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:32.894 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:32.894 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:32.894 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:32.894 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:32.894 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:32.894 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:32.894 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:32.894 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:32.894 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:32.894 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:32.894 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:32.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.895 --rc genhtml_branch_coverage=1 00:30:32.895 --rc genhtml_function_coverage=1 00:30:32.895 --rc genhtml_legend=1 00:30:32.895 --rc geninfo_all_blocks=1 00:30:32.895 --rc geninfo_unexecuted_blocks=1 00:30:32.895 00:30:32.895 ' 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:32.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.895 --rc genhtml_branch_coverage=1 00:30:32.895 --rc genhtml_function_coverage=1 00:30:32.895 --rc genhtml_legend=1 00:30:32.895 --rc geninfo_all_blocks=1 00:30:32.895 --rc geninfo_unexecuted_blocks=1 00:30:32.895 00:30:32.895 ' 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:32.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.895 --rc genhtml_branch_coverage=1 00:30:32.895 --rc genhtml_function_coverage=1 00:30:32.895 --rc genhtml_legend=1 00:30:32.895 --rc geninfo_all_blocks=1 00:30:32.895 --rc geninfo_unexecuted_blocks=1 00:30:32.895 00:30:32.895 ' 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:32.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.895 --rc genhtml_branch_coverage=1 00:30:32.895 --rc genhtml_function_coverage=1 00:30:32.895 --rc genhtml_legend=1 00:30:32.895 --rc geninfo_all_blocks=1 00:30:32.895 --rc geninfo_unexecuted_blocks=1 00:30:32.895 00:30:32.895 ' 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:30:32.895 15:23:33 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83566 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83566 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83566 ']' 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.895 15:23:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:32.895 [2024-11-20 15:23:33.643017] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
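This is the tcp_target_setup path from test/ftl/common.sh: the SPDK target is launched in the background pinned to core 0, and the harness blocks in waitforlisten until the default RPC socket /var/tmp/spdk.sock answers. A minimal sketch of the same sequence, assuming the paths from this run and substituting a plain polling loop for the harness's waitforlisten helper:

    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$spdk_tgt_bin" --cpumask='[0]' &
    spdk_tgt_pid=$!

    # Poll until the app answers on its RPC socket; rpc_get_methods is a
    # cheap query that only succeeds once the listener is up.
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done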
00:30:32.895 [2024-11-20 15:23:33.643391] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83566 ] 00:30:33.154 [2024-11-20 15:23:33.831266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.154 [2024-11-20 15:23:33.985305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:34.529 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:34.530 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:34.530 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:34.788 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:34.788 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:34.788 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:34.788 15:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:30:34.788 15:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:34.788 15:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:34.788 15:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:30:34.788 15:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:35.047 15:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:35.047 { 00:30:35.047 "name": "basen1", 00:30:35.047 "aliases": [ 00:30:35.047 "56bb9a8b-6084-40dd-8aab-3931ad76761a" 00:30:35.047 ], 00:30:35.047 "product_name": "NVMe disk", 00:30:35.047 "block_size": 4096, 00:30:35.047 "num_blocks": 1310720, 00:30:35.047 "uuid": "56bb9a8b-6084-40dd-8aab-3931ad76761a", 00:30:35.047 "numa_id": -1, 00:30:35.047 "assigned_rate_limits": { 00:30:35.047 "rw_ios_per_sec": 0, 00:30:35.047 "rw_mbytes_per_sec": 0, 00:30:35.047 "r_mbytes_per_sec": 0, 00:30:35.047 "w_mbytes_per_sec": 0 00:30:35.047 }, 00:30:35.047 "claimed": true, 00:30:35.047 "claim_type": "read_many_write_one", 00:30:35.047 "zoned": false, 00:30:35.047 "supported_io_types": { 00:30:35.047 "read": true, 00:30:35.047 "write": true, 00:30:35.047 "unmap": true, 00:30:35.047 "flush": true, 00:30:35.047 "reset": true, 00:30:35.047 "nvme_admin": true, 00:30:35.047 "nvme_io": true, 00:30:35.047 "nvme_io_md": false, 00:30:35.047 "write_zeroes": true, 00:30:35.047 "zcopy": false, 00:30:35.047 "get_zone_info": false, 00:30:35.047 "zone_management": false, 00:30:35.047 "zone_append": false, 00:30:35.047 "compare": true, 00:30:35.047 "compare_and_write": false, 00:30:35.047 "abort": true, 00:30:35.047 "seek_hole": false, 00:30:35.047 "seek_data": false, 00:30:35.047 "copy": true, 00:30:35.047 "nvme_iov_md": false 00:30:35.047 }, 00:30:35.047 "driver_specific": { 00:30:35.047 "nvme": [ 00:30:35.047 { 00:30:35.047 "pci_address": "0000:00:11.0", 00:30:35.047 "trid": { 00:30:35.047 "trtype": "PCIe", 00:30:35.047 "traddr": "0000:00:11.0" 00:30:35.047 }, 00:30:35.047 "ctrlr_data": { 00:30:35.047 "cntlid": 0, 00:30:35.047 "vendor_id": "0x1b36", 00:30:35.047 "model_number": "QEMU NVMe Ctrl", 00:30:35.047 "serial_number": "12341", 00:30:35.047 "firmware_revision": "8.0.0", 00:30:35.047 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:35.047 "oacs": { 00:30:35.047 "security": 0, 00:30:35.047 "format": 1, 00:30:35.047 "firmware": 0, 00:30:35.047 "ns_manage": 1 00:30:35.047 }, 00:30:35.047 "multi_ctrlr": false, 00:30:35.047 "ana_reporting": false 00:30:35.047 }, 00:30:35.047 "vs": { 00:30:35.047 "nvme_version": "1.4" 00:30:35.047 }, 00:30:35.047 "ns_data": { 00:30:35.047 "id": 1, 00:30:35.047 "can_share": false 00:30:35.047 } 00:30:35.047 } 00:30:35.047 ], 00:30:35.047 "mp_policy": "active_passive" 00:30:35.047 } 00:30:35.047 } 00:30:35.047 ]' 00:30:35.047 15:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:35.047 15:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:35.047 15:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:35.047 15:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:35.047 15:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:35.047 15:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:35.047 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:35.047 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:35.047 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:35.047 15:23:35 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:35.047 15:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:35.306 15:23:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=5d8663cd-5546-4903-b95c-c784edb91d06 00:30:35.306 15:23:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:35.306 15:23:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5d8663cd-5546-4903-b95c-c784edb91d06 00:30:35.564 15:23:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:35.822 15:23:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=f011d099-4fc5-468c-a01e-93b727a94600 00:30:35.822 15:23:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u f011d099-4fc5-468c-a01e-93b727a94600 00:30:36.080 15:23:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=4c4d20a7-7ac1-449c-b161-abf81373d34e 00:30:36.080 15:23:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 4c4d20a7-7ac1-449c-b161-abf81373d34e ]] 00:30:36.080 15:23:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 4c4d20a7-7ac1-449c-b161-abf81373d34e 5120 00:30:36.080 15:23:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:36.080 15:23:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:36.080 15:23:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=4c4d20a7-7ac1-449c-b161-abf81373d34e 00:30:36.080 15:23:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:36.080 15:23:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 4c4d20a7-7ac1-449c-b161-abf81373d34e 00:30:36.080 15:23:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4c4d20a7-7ac1-449c-b161-abf81373d34e 00:30:36.080 15:23:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:36.080 15:23:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:36.080 15:23:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:36.080 15:23:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c4d20a7-7ac1-449c-b161-abf81373d34e 00:30:36.339 15:23:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:36.339 { 00:30:36.339 "name": "4c4d20a7-7ac1-449c-b161-abf81373d34e", 00:30:36.339 "aliases": [ 00:30:36.339 "lvs/basen1p0" 00:30:36.339 ], 00:30:36.339 "product_name": "Logical Volume", 00:30:36.339 "block_size": 4096, 00:30:36.339 "num_blocks": 5242880, 00:30:36.339 "uuid": "4c4d20a7-7ac1-449c-b161-abf81373d34e", 00:30:36.339 "assigned_rate_limits": { 00:30:36.339 "rw_ios_per_sec": 0, 00:30:36.339 "rw_mbytes_per_sec": 0, 00:30:36.339 "r_mbytes_per_sec": 0, 00:30:36.339 "w_mbytes_per_sec": 0 00:30:36.339 }, 00:30:36.339 "claimed": false, 00:30:36.339 "zoned": false, 00:30:36.339 "supported_io_types": { 00:30:36.339 "read": true, 00:30:36.339 "write": true, 00:30:36.339 "unmap": true, 00:30:36.339 "flush": false, 00:30:36.339 "reset": true, 00:30:36.339 "nvme_admin": false, 00:30:36.339 "nvme_io": false, 00:30:36.339 "nvme_io_md": false, 00:30:36.339 "write_zeroes": 
true, 00:30:36.339 "zcopy": false, 00:30:36.339 "get_zone_info": false, 00:30:36.339 "zone_management": false, 00:30:36.339 "zone_append": false, 00:30:36.339 "compare": false, 00:30:36.339 "compare_and_write": false, 00:30:36.339 "abort": false, 00:30:36.339 "seek_hole": true, 00:30:36.339 "seek_data": true, 00:30:36.339 "copy": false, 00:30:36.339 "nvme_iov_md": false 00:30:36.339 }, 00:30:36.339 "driver_specific": { 00:30:36.339 "lvol": { 00:30:36.339 "lvol_store_uuid": "f011d099-4fc5-468c-a01e-93b727a94600", 00:30:36.339 "base_bdev": "basen1", 00:30:36.339 "thin_provision": true, 00:30:36.339 "num_allocated_clusters": 0, 00:30:36.339 "snapshot": false, 00:30:36.339 "clone": false, 00:30:36.339 "esnap_clone": false 00:30:36.339 } 00:30:36.339 } 00:30:36.339 } 00:30:36.339 ]' 00:30:36.339 15:23:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:36.339 15:23:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:36.339 15:23:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:36.598 15:23:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:30:36.598 15:23:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:30:36.598 15:23:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:30:36.598 15:23:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:30:36.598 15:23:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:36.598 15:23:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:30:36.916 15:23:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:30:36.916 15:23:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:30:36.916 15:23:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:30:37.175 15:23:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:30:37.175 15:23:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:30:37.175 15:23:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 4c4d20a7-7ac1-449c-b161-abf81373d34e -c cachen1p0 --l2p_dram_limit 2 00:30:37.435 [2024-11-20 15:23:38.072992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.435 [2024-11-20 15:23:38.073295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:37.435 [2024-11-20 15:23:38.073415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:37.435 [2024-11-20 15:23:38.073455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.435 [2024-11-20 15:23:38.073610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.435 [2024-11-20 15:23:38.073749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:37.435 [2024-11-20 15:23:38.073788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.093 ms 00:30:37.435 [2024-11-20 15:23:38.073871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.435 [2024-11-20 15:23:38.073939] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:37.435 [2024-11-20 
15:23:38.075123] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:37.435 [2024-11-20 15:23:38.075283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.435 [2024-11-20 15:23:38.075360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:37.435 [2024-11-20 15:23:38.075402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.352 ms 00:30:37.435 [2024-11-20 15:23:38.075519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.435 [2024-11-20 15:23:38.075641] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID b41d7cb8-ec9f-4951-83df-c2fa2ffb9b57 00:30:37.435 [2024-11-20 15:23:38.078214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.435 [2024-11-20 15:23:38.078353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:30:37.435 [2024-11-20 15:23:38.078373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:30:37.435 [2024-11-20 15:23:38.078388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.435 [2024-11-20 15:23:38.092753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.435 [2024-11-20 15:23:38.092892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:37.435 [2024-11-20 15:23:38.093021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.299 ms 00:30:37.435 [2024-11-20 15:23:38.093064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.435 [2024-11-20 15:23:38.093146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.435 [2024-11-20 15:23:38.093186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:37.435 [2024-11-20 15:23:38.093262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:30:37.435 [2024-11-20 15:23:38.093307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.435 [2024-11-20 15:23:38.093426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.435 [2024-11-20 15:23:38.093470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:37.435 [2024-11-20 15:23:38.093621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:37.435 [2024-11-20 15:23:38.093664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.435 [2024-11-20 15:23:38.093785] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:37.435 [2024-11-20 15:23:38.100343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.435 [2024-11-20 15:23:38.100472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:37.435 [2024-11-20 15:23:38.100546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.576 ms 00:30:37.435 [2024-11-20 15:23:38.100582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.435 [2024-11-20 15:23:38.100642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.435 [2024-11-20 15:23:38.100675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:37.435 [2024-11-20 15:23:38.100709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:37.435 [2024-11-20 15:23:38.100759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:37.435 [2024-11-20 15:23:38.100831] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:30:37.435 [2024-11-20 15:23:38.101104] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:37.435 [2024-11-20 15:23:38.101168] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:37.435 [2024-11-20 15:23:38.101223] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:37.435 [2024-11-20 15:23:38.101336] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:37.435 [2024-11-20 15:23:38.101393] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:37.435 [2024-11-20 15:23:38.101446] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:37.435 [2024-11-20 15:23:38.101476] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:37.435 [2024-11-20 15:23:38.101578] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:37.435 [2024-11-20 15:23:38.101594] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:37.435 [2024-11-20 15:23:38.101609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.435 [2024-11-20 15:23:38.101621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:37.435 [2024-11-20 15:23:38.101636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.783 ms 00:30:37.435 [2024-11-20 15:23:38.101647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.435 [2024-11-20 15:23:38.101745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.435 [2024-11-20 15:23:38.101758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:37.435 [2024-11-20 15:23:38.101772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:30:37.435 [2024-11-20 15:23:38.101795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.435 [2024-11-20 15:23:38.101903] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:37.435 [2024-11-20 15:23:38.101918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:37.435 [2024-11-20 15:23:38.101933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:37.435 [2024-11-20 15:23:38.101944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:37.435 [2024-11-20 15:23:38.101958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:37.435 [2024-11-20 15:23:38.101968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:37.435 [2024-11-20 15:23:38.101981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:37.435 [2024-11-20 15:23:38.101991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:37.435 [2024-11-20 15:23:38.102003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:37.435 [2024-11-20 15:23:38.102013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:37.435 [2024-11-20 15:23:38.102025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:37.435 [2024-11-20 15:23:38.102035] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:30:37.435 [2024-11-20 15:23:38.102047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:37.435 [2024-11-20 15:23:38.102057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:37.435 [2024-11-20 15:23:38.102069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:37.435 [2024-11-20 15:23:38.102078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:37.435 [2024-11-20 15:23:38.102094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:37.435 [2024-11-20 15:23:38.102103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:37.435 [2024-11-20 15:23:38.102115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:37.435 [2024-11-20 15:23:38.102125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:37.435 [2024-11-20 15:23:38.102138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:37.435 [2024-11-20 15:23:38.102149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:37.435 [2024-11-20 15:23:38.102161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:37.435 [2024-11-20 15:23:38.102170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:37.435 [2024-11-20 15:23:38.102183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:37.435 [2024-11-20 15:23:38.102192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:37.435 [2024-11-20 15:23:38.102204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:37.435 [2024-11-20 15:23:38.102213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:37.435 [2024-11-20 15:23:38.102226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:37.435 [2024-11-20 15:23:38.102236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:37.435 [2024-11-20 15:23:38.102248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:37.435 [2024-11-20 15:23:38.102257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:37.435 [2024-11-20 15:23:38.102272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:37.435 [2024-11-20 15:23:38.102282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:37.435 [2024-11-20 15:23:38.102294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:37.436 [2024-11-20 15:23:38.102304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:37.436 [2024-11-20 15:23:38.102316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:37.436 [2024-11-20 15:23:38.102325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:37.436 [2024-11-20 15:23:38.102337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:37.436 [2024-11-20 15:23:38.102346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:37.436 [2024-11-20 15:23:38.102358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:37.436 [2024-11-20 15:23:38.102367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:37.436 [2024-11-20 15:23:38.102379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:37.436 [2024-11-20 15:23:38.102388] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:30:37.436 [2024-11-20 15:23:38.102403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:37.436 [2024-11-20 15:23:38.102414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:37.436 [2024-11-20 15:23:38.102426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:37.436 [2024-11-20 15:23:38.102436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:37.436 [2024-11-20 15:23:38.102453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:37.436 [2024-11-20 15:23:38.102462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:37.436 [2024-11-20 15:23:38.102475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:37.436 [2024-11-20 15:23:38.102484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:37.436 [2024-11-20 15:23:38.102496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:37.436 [2024-11-20 15:23:38.102513] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:37.436 [2024-11-20 15:23:38.102530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:37.436 [2024-11-20 15:23:38.102546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:37.436 [2024-11-20 15:23:38.102560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:37.436 [2024-11-20 15:23:38.102570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:37.436 [2024-11-20 15:23:38.102584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:37.436 [2024-11-20 15:23:38.102595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:37.436 [2024-11-20 15:23:38.102609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:37.436 [2024-11-20 15:23:38.102620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:37.436 [2024-11-20 15:23:38.102633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:37.436 [2024-11-20 15:23:38.102644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:37.436 [2024-11-20 15:23:38.102659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:37.436 [2024-11-20 15:23:38.102670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:37.436 [2024-11-20 15:23:38.102684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:37.436 [2024-11-20 15:23:38.102695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:37.436 [2024-11-20 15:23:38.102708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:37.436 [2024-11-20 15:23:38.102728] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:37.436 [2024-11-20 15:23:38.102744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:37.436 [2024-11-20 15:23:38.102756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:37.436 [2024-11-20 15:23:38.102770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:37.436 [2024-11-20 15:23:38.102780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:37.436 [2024-11-20 15:23:38.102793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:37.436 [2024-11-20 15:23:38.102804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.436 [2024-11-20 15:23:38.102819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:37.436 [2024-11-20 15:23:38.102830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.963 ms 00:30:37.436 [2024-11-20 15:23:38.102843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.436 [2024-11-20 15:23:38.102891] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
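The layout dump above is printed while bdev_ftl_create brings the ftl bdev up for the first time: the superblock, L2P, band metadata, trim and P2L checkpoint regions sit on the 5 GiB NV cache partition (cachen1p0), while the 20 GiB thin-provisioned lvol supplies the data region. The creating RPC, traced earlier at ftl/common.sh@119, is a single call; a sketch with this run's identifiers:

    # -d names the base bdev (the thin lvol), -c the NV cache bdev;
    # the L2P table is capped at 2 MiB of resident DRAM.
    "$rpc_py" -t 60 bdev_ftl_create -b ftl \
        -d 4c4d20a7-7ac1-449c-b161-abf81373d34e \
        -c cachen1p0 --l2p_dram_limit 2

Because this is a fresh instance, the NV cache data region is scrubbed before startup completes; that scrub (2805 of the 3322 ms total startup reported below) accounts for the jump in the timestamps that follow.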
00:30:37.436 [2024-11-20 15:23:38.102911] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:40.726 [2024-11-20 15:23:40.903804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.726 [2024-11-20 15:23:40.904094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:40.726 [2024-11-20 15:23:40.904189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2805.456 ms 00:30:40.726 [2024-11-20 15:23:40.904233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.726 [2024-11-20 15:23:40.952239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.726 [2024-11-20 15:23:40.952513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:40.726 [2024-11-20 15:23:40.952616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.734 ms 00:30:40.726 [2024-11-20 15:23:40.952660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.726 [2024-11-20 15:23:40.952850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.726 [2024-11-20 15:23:40.952994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:40.726 [2024-11-20 15:23:40.953085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:40.726 [2024-11-20 15:23:40.953129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.726 [2024-11-20 15:23:41.007836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.726 [2024-11-20 15:23:41.008042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:40.726 [2024-11-20 15:23:41.008128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.714 ms 00:30:40.726 [2024-11-20 15:23:41.008173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.726 [2024-11-20 15:23:41.008304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.726 [2024-11-20 15:23:41.008351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:40.726 [2024-11-20 15:23:41.008385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:40.726 [2024-11-20 15:23:41.008492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.726 [2024-11-20 15:23:41.009364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.726 [2024-11-20 15:23:41.009491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:40.726 [2024-11-20 15:23:41.009582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.725 ms 00:30:40.726 [2024-11-20 15:23:41.009623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.726 [2024-11-20 15:23:41.009854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.726 [2024-11-20 15:23:41.009897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:40.726 [2024-11-20 15:23:41.010037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:30:40.726 [2024-11-20 15:23:41.010083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.726 [2024-11-20 15:23:41.035539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.726 [2024-11-20 15:23:41.035693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:40.726 [2024-11-20 15:23:41.035882] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.446 ms 00:30:40.726 [2024-11-20 15:23:41.035926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.726 [2024-11-20 15:23:41.061687] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:40.726 [2024-11-20 15:23:41.063364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.726 [2024-11-20 15:23:41.063493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:40.726 [2024-11-20 15:23:41.063521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.294 ms 00:30:40.726 [2024-11-20 15:23:41.063534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.726 [2024-11-20 15:23:41.097542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.726 [2024-11-20 15:23:41.097738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:40.726 [2024-11-20 15:23:41.097771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.019 ms 00:30:40.726 [2024-11-20 15:23:41.097784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.726 [2024-11-20 15:23:41.097935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.726 [2024-11-20 15:23:41.097955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:40.726 [2024-11-20 15:23:41.097974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:30:40.726 [2024-11-20 15:23:41.097986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.726 [2024-11-20 15:23:41.134288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.726 [2024-11-20 15:23:41.134327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:40.726 [2024-11-20 15:23:41.134346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.294 ms 00:30:40.726 [2024-11-20 15:23:41.134358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.726 [2024-11-20 15:23:41.169541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.726 [2024-11-20 15:23:41.169588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:40.727 [2024-11-20 15:23:41.169607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.191 ms 00:30:40.727 [2024-11-20 15:23:41.169617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.727 [2024-11-20 15:23:41.170356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.727 [2024-11-20 15:23:41.170396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:40.727 [2024-11-20 15:23:41.170411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.698 ms 00:30:40.727 [2024-11-20 15:23:41.170426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.727 [2024-11-20 15:23:41.271007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.727 [2024-11-20 15:23:41.271071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:40.727 [2024-11-20 15:23:41.271099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 100.677 ms 00:30:40.727 [2024-11-20 15:23:41.271112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.727 [2024-11-20 15:23:41.310325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:40.727 [2024-11-20 15:23:41.310382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:40.727 [2024-11-20 15:23:41.310418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.176 ms 00:30:40.727 [2024-11-20 15:23:41.310430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.727 [2024-11-20 15:23:41.349110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.727 [2024-11-20 15:23:41.349279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:40.727 [2024-11-20 15:23:41.349311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.689 ms 00:30:40.727 [2024-11-20 15:23:41.349323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.727 [2024-11-20 15:23:41.388755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.727 [2024-11-20 15:23:41.388832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:40.727 [2024-11-20 15:23:41.388856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.413 ms 00:30:40.727 [2024-11-20 15:23:41.388868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.727 [2024-11-20 15:23:41.388944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.727 [2024-11-20 15:23:41.388959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:40.727 [2024-11-20 15:23:41.388979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:40.727 [2024-11-20 15:23:41.388990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.727 [2024-11-20 15:23:41.389128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.727 [2024-11-20 15:23:41.389142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:40.727 [2024-11-20 15:23:41.389161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:30:40.727 [2024-11-20 15:23:41.389172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.727 [2024-11-20 15:23:41.390618] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3322.469 ms, result 0 00:30:40.727 { 00:30:40.727 "name": "ftl", 00:30:40.727 "uuid": "b41d7cb8-ec9f-4951-83df-c2fa2ffb9b57" 00:30:40.727 } 00:30:40.727 15:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:40.986 [2024-11-20 15:23:41.617119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.986 15:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:41.245 15:23:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:41.245 [2024-11-20 15:23:42.033158] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:41.245 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:41.505 [2024-11-20 15:23:42.255744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:41.505 15:23:42 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:42.073 Fill FTL, iteration 1 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83694 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83694 /var/tmp/spdk.tgt.sock 00:30:42.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83694 ']' 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:42.073 15:23:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:42.073 [2024-11-20 15:23:42.732164] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
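Here a second SPDK app is started to act as the initiator. It runs on core 1 with its own RPC socket (/var/tmp/spdk.tgt.sock) so it can coexist with the target that owns /var/tmp/spdk.sock. As the trace that follows shows, it attaches to the NVMe/TCP listener on 127.0.0.1:4420, which surfaces the FTL namespace as ftln1, and its bdev subsystem config is dumped so later spdk_dd runs can rebuild the connection from JSON alone; the helper app is then killed. A sketch of the sequence, with the redirection into config/ini.json implied by the harness rather than visible in the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock &

    "$rpc_py" -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
        -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2018-09.io.spdk:cnode0        # namespace 1 appears as ftln1

    # Wrap the bdev dump in a complete config document for spdk_dd --json
    {
        echo '{"subsystems": ['
        "$rpc_py" -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json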
00:30:42.073 [2024-11-20 15:23:42.732317] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83694 ] 00:30:42.331 [2024-11-20 15:23:42.921733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.331 [2024-11-20 15:23:43.066511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.704 15:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:43.705 15:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:43.705 15:23:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:43.705 ftln1 00:30:43.705 15:23:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:43.705 15:23:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:43.964 15:23:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:43.964 15:23:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83694 00:30:43.964 15:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83694 ']' 00:30:43.964 15:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83694 00:30:43.964 15:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:43.964 15:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:43.964 15:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83694 00:30:43.964 killing process with pid 83694 00:30:43.964 15:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:43.964 15:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:43.964 15:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83694' 00:30:43.964 15:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83694 00:30:43.964 15:23:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83694 00:30:47.244 15:23:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:47.244 15:23:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:47.244 [2024-11-20 15:23:47.497102] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
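Each fill pass is one spdk_dd invocation: it loads the bdev layer straight from ini.json (no live RPC server is needed), reads /dev/urandom, and writes 1024 one-MiB blocks to the ftln1 output bdev at queue depth 2, starting at the block offset given by --seek. This pass's command, reflowed for readability:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 \
        --bs=1048576 --count=1024 --qd=2 --seek=0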
00:30:47.244 [2024-11-20 15:23:47.497246] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83749 ] 00:30:47.244 [2024-11-20 15:23:47.693110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.244 [2024-11-20 15:23:47.853349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.619  [2024-11-20T15:23:50.834Z] Copying: 227/1024 [MB] (227 MBps) [2024-11-20T15:23:51.771Z] Copying: 465/1024 [MB] (238 MBps) [2024-11-20T15:23:52.706Z] Copying: 682/1024 [MB] (217 MBps) [2024-11-20T15:23:52.966Z] Copying: 908/1024 [MB] (226 MBps) [2024-11-20T15:23:54.345Z] Copying: 1024/1024 [MB] (average 227 MBps) 00:30:53.509 00:30:53.509 Calculate MD5 checksum, iteration 1 00:30:53.509 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:53.509 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:53.509 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:53.509 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:53.509 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:53.509 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:53.509 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:53.509 15:23:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:53.779 [2024-11-20 15:23:54.346310] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
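The read-back mirrors the fill: --ib names the input bdev and --of spills the data into a regular file so a host-side md5sum can run over it. --skip is the read offset in blocks and trails the fill's --seek by one iteration, so this pass reads blocks 0..1023 while the next fill appends at --seek=1024. The command behind the tcp_dd helper above, reflowed:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0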
00:30:53.779 [2024-11-20 15:23:54.346463] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83822 ] 00:30:53.779 [2024-11-20 15:23:54.532109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.046 [2024-11-20 15:23:54.677042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.423  [2024-11-20T15:23:56.827Z] Copying: 667/1024 [MB] (667 MBps) [2024-11-20T15:23:58.206Z] Copying: 1024/1024 [MB] (average 653 MBps) 00:30:57.370 00:30:57.370 15:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:57.370 15:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:59.308 15:23:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:59.308 15:23:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=f9cadea94c78487220245e1a0e2d9df0 00:30:59.308 15:23:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:59.308 15:23:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:59.308 Fill FTL, iteration 2 00:30:59.308 15:23:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:59.308 15:23:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:59.308 15:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:59.308 15:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:59.308 15:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:59.308 15:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:59.308 15:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:59.308 [2024-11-20 15:23:59.735458] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
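The `sums[i]=f9cadea94c78487220245e1a0e2d9df0` line above is the checksum capture for iteration 1: the window just written is read back through the same initiator config and hashed, with `cut -f1 '-d '` keeping only the digest from md5sum's output. Roughly, with tcp_dd standing for the script's wrapper around the spdk_dd invocation shown above:

    # Read GiB 0 back out of ftln1 and record its MD5 for later comparison.
    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
           --bs=1048576 --count=1024 --qd=2 --skip=0
    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')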
00:30:59.308 [2024-11-20 15:23:59.735601] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83879 ] 00:30:59.308 [2024-11-20 15:23:59.919800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.308 [2024-11-20 15:24:00.075125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.212  [2024-11-20T15:24:02.985Z] Copying: 232/1024 [MB] (232 MBps) [2024-11-20T15:24:03.922Z] Copying: 470/1024 [MB] (238 MBps) [2024-11-20T15:24:04.933Z] Copying: 713/1024 [MB] (243 MBps) [2024-11-20T15:24:04.933Z] Copying: 956/1024 [MB] (243 MBps) [2024-11-20T15:24:06.312Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:31:05.476 00:31:05.476 Calculate MD5 checksum, iteration 2 00:31:05.477 15:24:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:31:05.477 15:24:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:31:05.477 15:24:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:05.477 15:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:05.477 15:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:05.477 15:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:05.477 15:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:05.477 15:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:05.736 [2024-11-20 15:24:06.329771] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
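Putting the two iterations together, the loop driving upgrade_shutdown.sh@38-48 above has roughly this shape (a paraphrase, not the literal script; the seek/skip bookkeeping matches the values traced in the log):

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    iterations=2
    seek=0 skip=0 sums=()
    for ((i = 0; i < iterations; i++)); do
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        ((seek += 1024))
        tcp_dd --ib=ftln1 --of=$file --bs=1048576 --count=1024 --qd=2 --skip=$skip
        sums[i]=$(md5sum $file | cut -f1 '-d ')
        ((skip += 1024))
    done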
00:31:05.736 [2024-11-20 15:24:06.329926] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83949 ] 00:31:05.736 [2024-11-20 15:24:06.514730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.996 [2024-11-20 15:24:06.660389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.903  [2024-11-20T15:24:08.999Z] Copying: 654/1024 [MB] (654 MBps) [2024-11-20T15:24:10.956Z] Copying: 1024/1024 [MB] (average 655 MBps) 00:31:10.120 00:31:10.120 15:24:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:31:10.120 15:24:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:11.496 15:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:11.497 15:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=f9025fedb70a4ae76087bcbe1093300a 00:31:11.497 15:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:11.497 15:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:11.497 15:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:11.755 [2024-11-20 15:24:12.483020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.756 [2024-11-20 15:24:12.483102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:11.756 [2024-11-20 15:24:12.483122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:31:11.756 [2024-11-20 15:24:12.483134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.756 [2024-11-20 15:24:12.483164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.756 [2024-11-20 15:24:12.483177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:11.756 [2024-11-20 15:24:12.483194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:11.756 [2024-11-20 15:24:12.483205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.756 [2024-11-20 15:24:12.483227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.756 [2024-11-20 15:24:12.483239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:11.756 [2024-11-20 15:24:12.483250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:11.756 [2024-11-20 15:24:12.483260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.756 [2024-11-20 15:24:12.483331] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.308 ms, result 0 00:31:11.756 true 00:31:11.756 15:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:12.015 { 00:31:12.015 "name": "ftl", 00:31:12.015 "properties": [ 00:31:12.015 { 00:31:12.015 "name": "superblock_version", 00:31:12.015 "value": 5, 00:31:12.015 "read-only": true 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "name": "base_device", 00:31:12.015 "bands": [ 00:31:12.015 { 00:31:12.015 "id": 0, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 
00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 1, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 2, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 3, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 4, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 5, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 6, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 7, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 8, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 9, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 10, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 11, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 12, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 13, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 14, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 15, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 16, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 17, 00:31:12.015 "state": "FREE", 00:31:12.015 "validity": 0.0 00:31:12.015 } 00:31:12.015 ], 00:31:12.015 "read-only": true 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "name": "cache_device", 00:31:12.015 "type": "bdev", 00:31:12.015 "chunks": [ 00:31:12.015 { 00:31:12.015 "id": 0, 00:31:12.015 "state": "INACTIVE", 00:31:12.015 "utilization": 0.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 1, 00:31:12.015 "state": "CLOSED", 00:31:12.015 "utilization": 1.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 2, 00:31:12.015 "state": "CLOSED", 00:31:12.015 "utilization": 1.0 00:31:12.015 }, 00:31:12.015 { 00:31:12.015 "id": 3, 00:31:12.015 "state": "OPEN", 00:31:12.016 "utilization": 0.001953125 00:31:12.016 }, 00:31:12.016 { 00:31:12.016 "id": 4, 00:31:12.016 "state": "OPEN", 00:31:12.016 "utilization": 0.0 00:31:12.016 } 00:31:12.016 ], 00:31:12.016 "read-only": true 00:31:12.016 }, 00:31:12.016 { 00:31:12.016 "name": "verbose_mode", 00:31:12.016 "value": true, 00:31:12.016 "unit": "", 00:31:12.016 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:12.016 }, 00:31:12.016 { 00:31:12.016 "name": "prep_upgrade_on_shutdown", 00:31:12.016 "value": false, 00:31:12.016 "unit": "", 00:31:12.016 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:12.016 } 00:31:12.016 ] 00:31:12.016 } 00:31:12.016 15:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:31:12.275 [2024-11-20 15:24:12.962999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:12.275 [2024-11-20 15:24:12.963070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:12.275 [2024-11-20 15:24:12.963089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:12.275 [2024-11-20 15:24:12.963100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.275 [2024-11-20 15:24:12.963129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.275 [2024-11-20 15:24:12.963141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:12.275 [2024-11-20 15:24:12.963153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:12.275 [2024-11-20 15:24:12.963164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.275 [2024-11-20 15:24:12.963186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.275 [2024-11-20 15:24:12.963197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:12.275 [2024-11-20 15:24:12.963208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:12.275 [2024-11-20 15:24:12.963218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.275 [2024-11-20 15:24:12.963286] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.281 ms, result 0 00:31:12.275 true 00:31:12.275 15:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:31:12.275 15:24:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:12.275 15:24:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:12.533 15:24:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:31:12.533 15:24:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:31:12.533 15:24:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:12.792 [2024-11-20 15:24:13.430992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.792 [2024-11-20 15:24:13.431086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:12.792 [2024-11-20 15:24:13.431106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:12.792 [2024-11-20 15:24:13.431117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.792 [2024-11-20 15:24:13.431149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.792 [2024-11-20 15:24:13.431162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:12.792 [2024-11-20 15:24:13.431173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:12.792 [2024-11-20 15:24:13.431184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.792 [2024-11-20 15:24:13.431208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.792 [2024-11-20 15:24:13.431219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:12.792 [2024-11-20 15:24:13.431231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:12.792 [2024-11-20 15:24:13.431241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:12.792 [2024-11-20 15:24:13.431310] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.317 ms, result 0 00:31:12.792 true 00:31:12.792 15:24:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:13.052 { 00:31:13.052 "name": "ftl", 00:31:13.052 "properties": [ 00:31:13.052 { 00:31:13.052 "name": "superblock_version", 00:31:13.052 "value": 5, 00:31:13.052 "read-only": true 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "name": "base_device", 00:31:13.052 "bands": [ 00:31:13.052 { 00:31:13.052 "id": 0, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 1, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 2, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 3, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 4, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 5, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 6, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 7, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 8, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 9, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 10, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 11, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 12, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 13, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 14, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 15, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 16, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 17, 00:31:13.052 "state": "FREE", 00:31:13.052 "validity": 0.0 00:31:13.052 } 00:31:13.052 ], 00:31:13.052 "read-only": true 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "name": "cache_device", 00:31:13.052 "type": "bdev", 00:31:13.052 "chunks": [ 00:31:13.052 { 00:31:13.052 "id": 0, 00:31:13.052 "state": "INACTIVE", 00:31:13.052 "utilization": 0.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 1, 00:31:13.052 "state": "CLOSED", 00:31:13.052 "utilization": 1.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 2, 00:31:13.052 "state": "CLOSED", 00:31:13.052 "utilization": 1.0 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 3, 00:31:13.052 "state": "OPEN", 00:31:13.052 "utilization": 0.001953125 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "id": 4, 00:31:13.052 "state": "OPEN", 00:31:13.052 "utilization": 0.0 00:31:13.052 } 00:31:13.052 ], 00:31:13.052 "read-only": true 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "name": "verbose_mode", 
00:31:13.052 "value": true, 00:31:13.052 "unit": "", 00:31:13.052 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:13.052 }, 00:31:13.052 { 00:31:13.052 "name": "prep_upgrade_on_shutdown", 00:31:13.052 "value": true, 00:31:13.052 "unit": "", 00:31:13.052 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:13.052 } 00:31:13.052 ] 00:31:13.052 } 00:31:13.052 15:24:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:31:13.052 15:24:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83566 ]] 00:31:13.052 15:24:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83566 00:31:13.052 15:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83566 ']' 00:31:13.052 15:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83566 00:31:13.052 15:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:13.052 15:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.052 15:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83566 00:31:13.052 killing process with pid 83566 00:31:13.052 15:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:13.052 15:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:13.053 15:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83566' 00:31:13.053 15:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83566 00:31:13.053 15:24:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83566 00:31:14.427 [2024-11-20 15:24:14.948797] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:14.427 [2024-11-20 15:24:14.970315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:14.427 [2024-11-20 15:24:14.970364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:14.427 [2024-11-20 15:24:14.970381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:14.427 [2024-11-20 15:24:14.970392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:14.427 [2024-11-20 15:24:14.970417] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:14.427 [2024-11-20 15:24:14.975161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:14.427 [2024-11-20 15:24:14.975201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:14.427 [2024-11-20 15:24:14.975216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.733 ms 00:31:14.427 [2024-11-20 15:24:14.975235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.551 [2024-11-20 15:24:22.098663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.551 [2024-11-20 15:24:22.098768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:22.551 [2024-11-20 15:24:22.098791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7134.945 ms 00:31:22.551 [2024-11-20 15:24:22.098812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.551 [2024-11-20 15:24:22.099947] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:22.551 [2024-11-20 15:24:22.099983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:22.551 [2024-11-20 15:24:22.099999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.114 ms 00:31:22.551 [2024-11-20 15:24:22.100013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.551 [2024-11-20 15:24:22.100991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.551 [2024-11-20 15:24:22.101028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:22.551 [2024-11-20 15:24:22.101054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.942 ms 00:31:22.551 [2024-11-20 15:24:22.101074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.551 [2024-11-20 15:24:22.117323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.551 [2024-11-20 15:24:22.117381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:22.551 [2024-11-20 15:24:22.117397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.216 ms 00:31:22.551 [2024-11-20 15:24:22.117409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.551 [2024-11-20 15:24:22.127086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.551 [2024-11-20 15:24:22.127135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:22.551 [2024-11-20 15:24:22.127151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.649 ms 00:31:22.551 [2024-11-20 15:24:22.127162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.551 [2024-11-20 15:24:22.127255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.551 [2024-11-20 15:24:22.127270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:22.551 [2024-11-20 15:24:22.127289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:31:22.551 [2024-11-20 15:24:22.127300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.551 [2024-11-20 15:24:22.142258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.551 [2024-11-20 15:24:22.142304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:22.551 [2024-11-20 15:24:22.142320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.962 ms 00:31:22.551 [2024-11-20 15:24:22.142332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.551 [2024-11-20 15:24:22.157462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.551 [2024-11-20 15:24:22.157517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:22.551 [2024-11-20 15:24:22.157533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.114 ms 00:31:22.551 [2024-11-20 15:24:22.157550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.551 [2024-11-20 15:24:22.172634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.551 [2024-11-20 15:24:22.172685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:22.551 [2024-11-20 15:24:22.172700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.062 ms 00:31:22.551 [2024-11-20 15:24:22.172710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.551 [2024-11-20 15:24:22.187619] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.551 [2024-11-20 15:24:22.187674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:22.551 [2024-11-20 15:24:22.187689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.824 ms 00:31:22.551 [2024-11-20 15:24:22.187700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.551 [2024-11-20 15:24:22.187746] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:22.551 [2024-11-20 15:24:22.187773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:22.551 [2024-11-20 15:24:22.187795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:22.551 [2024-11-20 15:24:22.187835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:22.551 [2024-11-20 15:24:22.187848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:22.551 [2024-11-20 15:24:22.187860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:22.551 [2024-11-20 15:24:22.187872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:22.551 [2024-11-20 15:24:22.187883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:22.551 [2024-11-20 15:24:22.187897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:22.551 [2024-11-20 15:24:22.187908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:22.551 [2024-11-20 15:24:22.187919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:22.551 [2024-11-20 15:24:22.187930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:22.551 [2024-11-20 15:24:22.187941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:22.551 [2024-11-20 15:24:22.187952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:22.551 [2024-11-20 15:24:22.187962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:22.551 [2024-11-20 15:24:22.187972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:22.551 [2024-11-20 15:24:22.187982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:22.551 [2024-11-20 15:24:22.187993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:22.552 [2024-11-20 15:24:22.188003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:22.552 [2024-11-20 15:24:22.188018] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:22.552 [2024-11-20 15:24:22.188028] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b41d7cb8-ec9f-4951-83df-c2fa2ffb9b57 00:31:22.552 [2024-11-20 15:24:22.188039] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:22.552 [2024-11-20 15:24:22.188050] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:31:22.552 [2024-11-20 15:24:22.188061] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:22.552 [2024-11-20 15:24:22.188074] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:22.552 [2024-11-20 15:24:22.188085] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:22.552 [2024-11-20 15:24:22.188102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:22.552 [2024-11-20 15:24:22.188113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:22.552 [2024-11-20 15:24:22.188122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:22.552 [2024-11-20 15:24:22.188131] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:22.552 [2024-11-20 15:24:22.188143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.552 [2024-11-20 15:24:22.188160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:22.552 [2024-11-20 15:24:22.188172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.399 ms 00:31:22.552 [2024-11-20 15:24:22.188183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.552 [2024-11-20 15:24:22.210106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.552 [2024-11-20 15:24:22.210156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:22.552 [2024-11-20 15:24:22.210172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.920 ms 00:31:22.552 [2024-11-20 15:24:22.210190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.552 [2024-11-20 15:24:22.210848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.552 [2024-11-20 15:24:22.210866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:22.552 [2024-11-20 15:24:22.210879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.626 ms 00:31:22.552 [2024-11-20 15:24:22.210890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.552 [2024-11-20 15:24:22.282475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:22.552 [2024-11-20 15:24:22.282553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:22.552 [2024-11-20 15:24:22.282577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:22.552 [2024-11-20 15:24:22.282589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.552 [2024-11-20 15:24:22.282652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:22.552 [2024-11-20 15:24:22.282663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:22.552 [2024-11-20 15:24:22.282674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:22.552 [2024-11-20 15:24:22.282685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.552 [2024-11-20 15:24:22.282811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:22.552 [2024-11-20 15:24:22.282826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:22.552 [2024-11-20 15:24:22.282838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:22.552 [2024-11-20 15:24:22.282855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.552 [2024-11-20 15:24:22.282876] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:22.552 [2024-11-20 15:24:22.282888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:22.552 [2024-11-20 15:24:22.282898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:22.552 [2024-11-20 15:24:22.282919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.552 [2024-11-20 15:24:22.424379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:22.552 [2024-11-20 15:24:22.424461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:22.552 [2024-11-20 15:24:22.424489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:22.552 [2024-11-20 15:24:22.424501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.552 [2024-11-20 15:24:22.535688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:22.552 [2024-11-20 15:24:22.535801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:22.552 [2024-11-20 15:24:22.535819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:22.552 [2024-11-20 15:24:22.535831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.552 [2024-11-20 15:24:22.535984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:22.552 [2024-11-20 15:24:22.535997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:22.552 [2024-11-20 15:24:22.536009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:22.552 [2024-11-20 15:24:22.536021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.552 [2024-11-20 15:24:22.536107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:22.552 [2024-11-20 15:24:22.536120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:22.552 [2024-11-20 15:24:22.536131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:22.552 [2024-11-20 15:24:22.536142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.552 [2024-11-20 15:24:22.536280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:22.552 [2024-11-20 15:24:22.536294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:22.552 [2024-11-20 15:24:22.536306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:22.552 [2024-11-20 15:24:22.536317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.552 [2024-11-20 15:24:22.536357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:22.552 [2024-11-20 15:24:22.536375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:22.552 [2024-11-20 15:24:22.536386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:22.552 [2024-11-20 15:24:22.536396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.552 [2024-11-20 15:24:22.536449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:22.552 [2024-11-20 15:24:22.536461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:22.552 [2024-11-20 15:24:22.536472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:22.552 [2024-11-20 15:24:22.536482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.552 
[2024-11-20 15:24:22.536539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:22.552 [2024-11-20 15:24:22.536552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:22.552 [2024-11-20 15:24:22.536564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:22.552 [2024-11-20 15:24:22.536575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.552 [2024-11-20 15:24:22.536734] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7578.657 ms, result 0 00:31:25.843 15:24:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:25.843 15:24:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:25.843 15:24:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:25.843 15:24:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:25.843 15:24:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:25.843 15:24:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84150 00:31:25.843 15:24:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:25.843 15:24:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:25.843 15:24:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84150 00:31:25.843 15:24:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84150 ']' 00:31:25.843 15:24:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.843 15:24:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:25.843 15:24:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.843 15:24:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:25.843 15:24:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:25.843 [2024-11-20 15:24:26.351685] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
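The target then comes back under PID 84150 from the tgt.json config the harness saved earlier (ftl/common.sh@84 checks that it exists, @85 boots from it). Roughly, with the backgrounding and PID capture paraphrased:

    # Restart the target from the saved config; the FTL bdev is re-created from it.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten $spdk_tgt_pid

The bringup traced below is the payoff of the prep work: the superblock loads with 'SHM: clean 0, shm_clean 0', layout setup runs in mode 0, the NV cache data region is scrubbed (about 2.5 s), and the band, valid-map and trim metadata persisted during shutdown are restored.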
00:31:25.843 [2024-11-20 15:24:26.351843] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84150 ] 00:31:25.843 [2024-11-20 15:24:26.541292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.102 [2024-11-20 15:24:26.679868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.038 [2024-11-20 15:24:27.813222] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:27.038 [2024-11-20 15:24:27.813313] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:27.299 [2024-11-20 15:24:27.962346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.299 [2024-11-20 15:24:27.962412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:27.299 [2024-11-20 15:24:27.962430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:27.299 [2024-11-20 15:24:27.962443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.299 [2024-11-20 15:24:27.962527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.299 [2024-11-20 15:24:27.962542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:27.299 [2024-11-20 15:24:27.962554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:31:27.299 [2024-11-20 15:24:27.962565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.299 [2024-11-20 15:24:27.962591] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:27.299 [2024-11-20 15:24:27.963631] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:27.299 [2024-11-20 15:24:27.963664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.299 [2024-11-20 15:24:27.963676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:27.299 [2024-11-20 15:24:27.963688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.080 ms 00:31:27.299 [2024-11-20 15:24:27.963700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.299 [2024-11-20 15:24:27.966143] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:27.299 [2024-11-20 15:24:27.986929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.299 [2024-11-20 15:24:27.986982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:27.299 [2024-11-20 15:24:27.987006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.819 ms 00:31:27.299 [2024-11-20 15:24:27.987019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.299 [2024-11-20 15:24:27.987105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.299 [2024-11-20 15:24:27.987119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:27.299 [2024-11-20 15:24:27.987131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:31:27.299 [2024-11-20 15:24:27.987142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.299 [2024-11-20 15:24:27.999527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.299 [2024-11-20 
15:24:27.999582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:27.299 [2024-11-20 15:24:27.999598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.304 ms 00:31:27.299 [2024-11-20 15:24:27.999610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.299 [2024-11-20 15:24:27.999712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.299 [2024-11-20 15:24:27.999750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:27.299 [2024-11-20 15:24:27.999762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:31:27.299 [2024-11-20 15:24:27.999773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.299 [2024-11-20 15:24:27.999864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.299 [2024-11-20 15:24:27.999877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:27.299 [2024-11-20 15:24:27.999894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:27.299 [2024-11-20 15:24:27.999904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.299 [2024-11-20 15:24:27.999939] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:27.300 [2024-11-20 15:24:28.005820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.300 [2024-11-20 15:24:28.005856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:27.300 [2024-11-20 15:24:28.005869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.900 ms 00:31:27.300 [2024-11-20 15:24:28.005886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.300 [2024-11-20 15:24:28.005920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.300 [2024-11-20 15:24:28.005932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:27.300 [2024-11-20 15:24:28.005943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:27.300 [2024-11-20 15:24:28.005953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.300 [2024-11-20 15:24:28.006004] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:27.300 [2024-11-20 15:24:28.006032] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:27.300 [2024-11-20 15:24:28.006076] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:27.300 [2024-11-20 15:24:28.006097] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:27.300 [2024-11-20 15:24:28.006191] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:27.300 [2024-11-20 15:24:28.006206] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:27.300 [2024-11-20 15:24:28.006219] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:27.300 [2024-11-20 15:24:28.006234] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:27.300 [2024-11-20 15:24:28.006247] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:27.300 [2024-11-20 15:24:28.006263] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:27.300 [2024-11-20 15:24:28.006274] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:27.300 [2024-11-20 15:24:28.006285] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:27.300 [2024-11-20 15:24:28.006295] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:27.300 [2024-11-20 15:24:28.006307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.300 [2024-11-20 15:24:28.006317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:27.300 [2024-11-20 15:24:28.006329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.309 ms 00:31:27.300 [2024-11-20 15:24:28.006339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.300 [2024-11-20 15:24:28.006415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.300 [2024-11-20 15:24:28.006432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:27.300 [2024-11-20 15:24:28.006444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:31:27.300 [2024-11-20 15:24:28.006460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.300 [2024-11-20 15:24:28.006560] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:27.300 [2024-11-20 15:24:28.006580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:27.300 [2024-11-20 15:24:28.006591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:27.300 [2024-11-20 15:24:28.006602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.300 [2024-11-20 15:24:28.006614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:27.300 [2024-11-20 15:24:28.006623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:27.300 [2024-11-20 15:24:28.006633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:27.300 [2024-11-20 15:24:28.006642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:27.300 [2024-11-20 15:24:28.006652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:27.300 [2024-11-20 15:24:28.006662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.300 [2024-11-20 15:24:28.006678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:27.300 [2024-11-20 15:24:28.006688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:27.300 [2024-11-20 15:24:28.006697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.300 [2024-11-20 15:24:28.006706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:27.300 [2024-11-20 15:24:28.006747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:27.300 [2024-11-20 15:24:28.006758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.300 [2024-11-20 15:24:28.006768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:27.300 [2024-11-20 15:24:28.006777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:27.300 [2024-11-20 15:24:28.006787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.300 [2024-11-20 15:24:28.006797] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:27.300 [2024-11-20 15:24:28.006807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:27.300 [2024-11-20 15:24:28.006816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:27.300 [2024-11-20 15:24:28.006826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:27.300 [2024-11-20 15:24:28.006836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:27.300 [2024-11-20 15:24:28.006846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:27.300 [2024-11-20 15:24:28.006869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:27.300 [2024-11-20 15:24:28.006879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:27.300 [2024-11-20 15:24:28.006888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:27.300 [2024-11-20 15:24:28.006898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:27.300 [2024-11-20 15:24:28.006908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:27.300 [2024-11-20 15:24:28.006917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:27.300 [2024-11-20 15:24:28.006927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:27.300 [2024-11-20 15:24:28.006937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:27.300 [2024-11-20 15:24:28.006947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.300 [2024-11-20 15:24:28.006956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:27.300 [2024-11-20 15:24:28.006966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:27.300 [2024-11-20 15:24:28.006975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.300 [2024-11-20 15:24:28.006985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:27.300 [2024-11-20 15:24:28.006995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:27.300 [2024-11-20 15:24:28.007003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.300 [2024-11-20 15:24:28.007013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:27.300 [2024-11-20 15:24:28.007022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:27.300 [2024-11-20 15:24:28.007034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.300 [2024-11-20 15:24:28.007044] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:27.300 [2024-11-20 15:24:28.007055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:27.300 [2024-11-20 15:24:28.007065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:27.300 [2024-11-20 15:24:28.007075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.300 [2024-11-20 15:24:28.007090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:27.300 [2024-11-20 15:24:28.007101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:27.300 [2024-11-20 15:24:28.007110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:27.300 [2024-11-20 15:24:28.007120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:27.300 [2024-11-20 15:24:28.007130] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:27.300 [2024-11-20 15:24:28.007140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:27.300 [2024-11-20 15:24:28.007152] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:27.300 [2024-11-20 15:24:28.007166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:27.300 [2024-11-20 15:24:28.007178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:27.300 [2024-11-20 15:24:28.007189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:27.300 [2024-11-20 15:24:28.007200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:27.300 [2024-11-20 15:24:28.007212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:27.300 [2024-11-20 15:24:28.007223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:27.300 [2024-11-20 15:24:28.007234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:27.300 [2024-11-20 15:24:28.007244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:27.300 [2024-11-20 15:24:28.007255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:27.300 [2024-11-20 15:24:28.007266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:27.300 [2024-11-20 15:24:28.007277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:27.300 [2024-11-20 15:24:28.007288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:27.300 [2024-11-20 15:24:28.007299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:27.300 [2024-11-20 15:24:28.007309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:27.300 [2024-11-20 15:24:28.007320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:27.300 [2024-11-20 15:24:28.007332] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:27.300 [2024-11-20 15:24:28.007344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:27.301 [2024-11-20 15:24:28.007355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:27.301 [2024-11-20 15:24:28.007365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:27.301 [2024-11-20 15:24:28.007375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:27.301 [2024-11-20 15:24:28.007388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:27.301 [2024-11-20 15:24:28.007400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.301 [2024-11-20 15:24:28.007412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:27.301 [2024-11-20 15:24:28.007423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.898 ms 00:31:27.301 [2024-11-20 15:24:28.007433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.301 [2024-11-20 15:24:28.007489] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:27.301 [2024-11-20 15:24:28.007503] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:29.841 [2024-11-20 15:24:30.544413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.841 [2024-11-20 15:24:30.544527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:29.841 [2024-11-20 15:24:30.544549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2541.042 ms 00:31:29.841 [2024-11-20 15:24:30.544562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.841 [2024-11-20 15:24:30.593911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.841 [2024-11-20 15:24:30.593990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:29.841 [2024-11-20 15:24:30.594008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.932 ms 00:31:29.841 [2024-11-20 15:24:30.594021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.841 [2024-11-20 15:24:30.594193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.841 [2024-11-20 15:24:30.594213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:29.841 [2024-11-20 15:24:30.594226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:31:29.841 [2024-11-20 15:24:30.594237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.841 [2024-11-20 15:24:30.649257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.841 [2024-11-20 15:24:30.649332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:29.841 [2024-11-20 15:24:30.649349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.029 ms 00:31:29.841 [2024-11-20 15:24:30.649366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.841 [2024-11-20 15:24:30.649460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.841 [2024-11-20 15:24:30.649471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:29.841 [2024-11-20 15:24:30.649484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:29.841 [2024-11-20 15:24:30.649494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.841 [2024-11-20 15:24:30.650343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.841 [2024-11-20 15:24:30.650368] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:29.841 [2024-11-20 15:24:30.650380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.754 ms 00:31:29.841 [2024-11-20 15:24:30.650391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.841 [2024-11-20 15:24:30.650454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.841 [2024-11-20 15:24:30.650470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:29.841 [2024-11-20 15:24:30.650482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:31:29.841 [2024-11-20 15:24:30.650492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.101 [2024-11-20 15:24:30.676682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.101 [2024-11-20 15:24:30.676765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:30.101 [2024-11-20 15:24:30.676783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.206 ms 00:31:30.101 [2024-11-20 15:24:30.676796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.101 [2024-11-20 15:24:30.711623] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:30.101 [2024-11-20 15:24:30.711700] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:30.101 [2024-11-20 15:24:30.711730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.101 [2024-11-20 15:24:30.711743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:30.101 [2024-11-20 15:24:30.711758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.783 ms 00:31:30.101 [2024-11-20 15:24:30.711770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.101 [2024-11-20 15:24:30.733909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.101 [2024-11-20 15:24:30.733982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:30.101 [2024-11-20 15:24:30.734002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.085 ms 00:31:30.101 [2024-11-20 15:24:30.734014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.101 [2024-11-20 15:24:30.753798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.101 [2024-11-20 15:24:30.753887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:30.101 [2024-11-20 15:24:30.753906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.734 ms 00:31:30.101 [2024-11-20 15:24:30.753919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.102 [2024-11-20 15:24:30.773799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.102 [2024-11-20 15:24:30.773868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:30.102 [2024-11-20 15:24:30.773886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.846 ms 00:31:30.102 [2024-11-20 15:24:30.773897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.102 [2024-11-20 15:24:30.774864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.102 [2024-11-20 15:24:30.774905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:30.102 [2024-11-20 
15:24:30.774920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.791 ms 00:31:30.102 [2024-11-20 15:24:30.774932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.102 [2024-11-20 15:24:30.874911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.102 [2024-11-20 15:24:30.875007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:30.102 [2024-11-20 15:24:30.875029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 100.099 ms 00:31:30.102 [2024-11-20 15:24:30.875042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.102 [2024-11-20 15:24:30.887910] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:30.102 [2024-11-20 15:24:30.889620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.102 [2024-11-20 15:24:30.889654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:30.102 [2024-11-20 15:24:30.889671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.512 ms 00:31:30.102 [2024-11-20 15:24:30.889682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.102 [2024-11-20 15:24:30.889840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.102 [2024-11-20 15:24:30.889861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:30.102 [2024-11-20 15:24:30.889873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:30.102 [2024-11-20 15:24:30.889884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.102 [2024-11-20 15:24:30.889962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.102 [2024-11-20 15:24:30.889980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:30.102 [2024-11-20 15:24:30.889993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:31:30.102 [2024-11-20 15:24:30.890004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.102 [2024-11-20 15:24:30.890036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.102 [2024-11-20 15:24:30.890048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:30.102 [2024-11-20 15:24:30.890065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:30.102 [2024-11-20 15:24:30.890075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.102 [2024-11-20 15:24:30.890117] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:30.102 [2024-11-20 15:24:30.890130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.102 [2024-11-20 15:24:30.890141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:30.102 [2024-11-20 15:24:30.890152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:30.102 [2024-11-20 15:24:30.890163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.102 [2024-11-20 15:24:30.929238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.102 [2024-11-20 15:24:30.929314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:30.102 [2024-11-20 15:24:30.929332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.107 ms 00:31:30.102 [2024-11-20 15:24:30.929344] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.102 [2024-11-20 15:24:30.929456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.102 [2024-11-20 15:24:30.929469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:30.102 [2024-11-20 15:24:30.929482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:31:30.102 [2024-11-20 15:24:30.929493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.102 [2024-11-20 15:24:30.931173] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2973.069 ms, result 0 00:31:30.360 [2024-11-20 15:24:30.945675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.360 [2024-11-20 15:24:30.961741] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:30.360 [2024-11-20 15:24:30.972599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:30.360 15:24:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:30.360 15:24:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:30.360 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:30.360 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:30.360 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:30.619 [2024-11-20 15:24:31.208223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.620 [2024-11-20 15:24:31.208302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:30.620 [2024-11-20 15:24:31.208321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:30.620 [2024-11-20 15:24:31.208337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.620 [2024-11-20 15:24:31.208371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.620 [2024-11-20 15:24:31.208383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:30.620 [2024-11-20 15:24:31.208395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:30.620 [2024-11-20 15:24:31.208405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.620 [2024-11-20 15:24:31.208428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.620 [2024-11-20 15:24:31.208439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:30.620 [2024-11-20 15:24:31.208451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:30.620 [2024-11-20 15:24:31.208461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.620 [2024-11-20 15:24:31.208534] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.308 ms, result 0 00:31:30.620 true 00:31:30.620 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:30.620 { 00:31:30.620 "name": "ftl", 00:31:30.620 "properties": [ 00:31:30.620 { 00:31:30.620 "name": "superblock_version", 00:31:30.620 "value": 5, 00:31:30.620 "read-only": true 00:31:30.620 }, 
00:31:30.620 { 00:31:30.620 "name": "base_device", 00:31:30.620 "bands": [ 00:31:30.620 { 00:31:30.620 "id": 0, 00:31:30.620 "state": "CLOSED", 00:31:30.620 "validity": 1.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 1, 00:31:30.620 "state": "CLOSED", 00:31:30.620 "validity": 1.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 2, 00:31:30.620 "state": "CLOSED", 00:31:30.620 "validity": 0.007843137254901933 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 3, 00:31:30.620 "state": "FREE", 00:31:30.620 "validity": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 4, 00:31:30.620 "state": "FREE", 00:31:30.620 "validity": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 5, 00:31:30.620 "state": "FREE", 00:31:30.620 "validity": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 6, 00:31:30.620 "state": "FREE", 00:31:30.620 "validity": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 7, 00:31:30.620 "state": "FREE", 00:31:30.620 "validity": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 8, 00:31:30.620 "state": "FREE", 00:31:30.620 "validity": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 9, 00:31:30.620 "state": "FREE", 00:31:30.620 "validity": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 10, 00:31:30.620 "state": "FREE", 00:31:30.620 "validity": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 11, 00:31:30.620 "state": "FREE", 00:31:30.620 "validity": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 12, 00:31:30.620 "state": "FREE", 00:31:30.620 "validity": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 13, 00:31:30.620 "state": "FREE", 00:31:30.620 "validity": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 14, 00:31:30.620 "state": "FREE", 00:31:30.620 "validity": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 15, 00:31:30.620 "state": "FREE", 00:31:30.620 "validity": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 16, 00:31:30.620 "state": "FREE", 00:31:30.620 "validity": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 17, 00:31:30.620 "state": "FREE", 00:31:30.620 "validity": 0.0 00:31:30.620 } 00:31:30.620 ], 00:31:30.620 "read-only": true 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "name": "cache_device", 00:31:30.620 "type": "bdev", 00:31:30.620 "chunks": [ 00:31:30.620 { 00:31:30.620 "id": 0, 00:31:30.620 "state": "INACTIVE", 00:31:30.620 "utilization": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 1, 00:31:30.620 "state": "OPEN", 00:31:30.620 "utilization": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 2, 00:31:30.620 "state": "OPEN", 00:31:30.620 "utilization": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 3, 00:31:30.620 "state": "FREE", 00:31:30.620 "utilization": 0.0 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "id": 4, 00:31:30.620 "state": "FREE", 00:31:30.620 "utilization": 0.0 00:31:30.620 } 00:31:30.620 ], 00:31:30.620 "read-only": true 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "name": "verbose_mode", 00:31:30.620 "value": true, 00:31:30.620 "unit": "", 00:31:30.620 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:30.620 }, 00:31:30.620 { 00:31:30.620 "name": "prep_upgrade_on_shutdown", 00:31:30.620 "value": false, 00:31:30.620 "unit": "", 00:31:30.620 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:30.620 } 00:31:30.620 ] 00:31:30.620 } 00:31:30.620 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:31:30.620 15:24:31 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:30.620 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:30.879 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:30.879 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:30.879 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:30.879 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:30.879 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:31.138 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:31.138 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:31.138 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:31.138 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:31.138 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:31.138 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:31.138 Validate MD5 checksum, iteration 1 00:31:31.138 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:31.138 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:31.138 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:31.138 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:31.138 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:31.138 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:31.138 15:24:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:31.397 [2024-11-20 15:24:32.023610] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
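The two jq probes traced above gate the test on a quiescent device: zero NV cache chunks with non-zero utilization, and zero bands reported OPENED. A minimal bash sketch of that gate follows, with the jq filters and rpc.py path copied verbatim from the trace; the ftl_get_properties helper name matches the trace, but folding both "-ne 0" checks into a single guard is an illustrative simplification, not the script's literal structure.

    #!/usr/bin/env bash
    # Sketch reconstructed from the trace above; not the literal upgrade_shutdown.sh.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    ftl_get_properties() {
        "$rpc" bdev_ftl_get_properties -b ftl
    }

    # NV cache chunks that still hold data (trace: used=0).
    used=$(ftl_get_properties | jq '[.properties[]
        | select(.name == "cache_device") | .chunks[]
        | select(.utilization != 0.0)] | length')

    # Bands still in the OPENED state (trace: opened=0).
    opened=$(ftl_get_properties | jq '[.properties[]
        | select(.name == "bands") | .bands[]
        | select(.state == "OPENED")] | length')

    # Illustrative combined guard; the traced script tests each count separately.
    if (( used != 0 || opened != 0 )); then
        echo "ftl bdev not quiescent: used=$used opened=$opened" >&2
        exit 1
    fi

Note that the second filter selects a property named "bands", while the JSON dump above nests the band list under the "base_device" property; as recorded, both counts come back 0 and the test proceeds.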
00:31:31.397 [2024-11-20 15:24:32.023783] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84236 ] 00:31:31.397 [2024-11-20 15:24:32.210528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.655 [2024-11-20 15:24:32.355092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:33.561  [2024-11-20T15:24:34.656Z] Copying: 677/1024 [MB] (677 MBps) [2024-11-20T15:24:36.560Z] Copying: 1024/1024 [MB] (average 660 MBps) 00:31:35.724 00:31:35.724 15:24:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:35.724 15:24:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:37.625 15:24:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:37.625 Validate MD5 checksum, iteration 2 00:31:37.625 15:24:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f9cadea94c78487220245e1a0e2d9df0 00:31:37.625 15:24:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f9cadea94c78487220245e1a0e2d9df0 != \f\9\c\a\d\e\a\9\4\c\7\8\4\8\7\2\2\0\2\4\5\e\1\a\0\e\2\d\9\d\f\0 ]] 00:31:37.625 15:24:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:37.626 15:24:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:37.626 15:24:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:37.626 15:24:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:37.626 15:24:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:37.626 15:24:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:37.626 15:24:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:37.626 15:24:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:37.626 15:24:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:37.626 [2024-11-20 15:24:38.256261] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
00:31:37.626 [2024-11-20 15:24:38.256391] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84303 ] 00:31:37.626 [2024-11-20 15:24:38.444290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.885 [2024-11-20 15:24:38.587929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.791  [2024-11-20T15:24:41.197Z] Copying: 646/1024 [MB] (646 MBps) [2024-11-20T15:24:42.589Z] Copying: 1024/1024 [MB] (average 650 MBps) 00:31:41.753 00:31:41.753 15:24:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:41.753 15:24:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:43.660 15:24:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:43.660 15:24:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f9025fedb70a4ae76087bcbe1093300a 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f9025fedb70a4ae76087bcbe1093300a != \f\9\0\2\5\f\e\d\b\7\0\a\4\a\e\7\6\0\8\7\b\c\b\e\1\0\9\3\3\0\0\a ]] 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84150 ]] 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84150 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84369 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84369 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84369 ']' 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:43.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
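Condensed, the sequence recorded above is: stream two 1 GiB windows out of ftln1 with spdk_dd over NVMe/TCP, md5sum each window, verify the sums, then SIGKILL the running target so FTL is left dirty and start a fresh target from the saved tgt.json. A hedged reconstruction in bash; tcp_dd and waitforlisten are the helpers visible in the trace, while the sums array is a stand-in for however the script actually carries the expected checksums.

    # Reconstruction of the traced flow; paths, flags, and checksums copied from
    # the log, control flow approximate.
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    iterations=2
    skip=0
    sums=(f9cadea94c78487220245e1a0e2d9df0 f9025fedb70a4ae76087bcbe1093300a)

    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # 1024 x 1 MiB blocks from the ftln1 bdev, queue depth 2, sliding window.
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        skip=$((skip + 1024))
        sum=$(md5sum "$file" | cut -f1 -d' ')
        [[ $sum == "${sums[i]}" ]] || exit 1
    done

    # Dirty shutdown: SIGKILL leaves FTL in the dirty state, then restart the
    # target from the config saved earlier so startup runs the recovery path.
    kill -9 "$spdk_tgt_pid"
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"

The recovery that follows in the log (P2L checkpoint preprocessing, "Recover open chunk" for the two full chunks, L2P restore from shared memory) is exactly the startup path this restart is meant to exercise.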
00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:43.661 15:24:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:43.661 [2024-11-20 15:24:44.349480] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:31:43.661 [2024-11-20 15:24:44.349650] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84369 ] 00:31:43.661 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84150 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:43.920 [2024-11-20 15:24:44.539025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.920 [2024-11-20 15:24:44.677445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.300 [2024-11-20 15:24:45.775503] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:45.300 [2024-11-20 15:24:45.775587] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:45.300 [2024-11-20 15:24:45.924357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.300 [2024-11-20 15:24:45.924423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:45.300 [2024-11-20 15:24:45.924442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:45.300 [2024-11-20 15:24:45.924454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.300 [2024-11-20 15:24:45.924534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.300 [2024-11-20 15:24:45.924549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:45.300 [2024-11-20 15:24:45.924561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:31:45.300 [2024-11-20 15:24:45.924571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.300 [2024-11-20 15:24:45.924597] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:45.300 [2024-11-20 15:24:45.925616] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:45.300 [2024-11-20 15:24:45.925643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.300 [2024-11-20 15:24:45.925655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:45.300 [2024-11-20 15:24:45.925667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.052 ms 00:31:45.300 [2024-11-20 15:24:45.925678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.300 [2024-11-20 15:24:45.926131] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:45.300 [2024-11-20 15:24:45.952655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.300 [2024-11-20 15:24:45.952715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:45.300 [2024-11-20 15:24:45.952745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.565 ms 
00:31:45.300 [2024-11-20 15:24:45.952758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.300 [2024-11-20 15:24:45.967754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.300 [2024-11-20 15:24:45.967802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:45.300 [2024-11-20 15:24:45.967823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:31:45.300 [2024-11-20 15:24:45.967834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.300 [2024-11-20 15:24:45.968398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.300 [2024-11-20 15:24:45.968414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:45.300 [2024-11-20 15:24:45.968427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.446 ms 00:31:45.300 [2024-11-20 15:24:45.968437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.300 [2024-11-20 15:24:45.968510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.300 [2024-11-20 15:24:45.968531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:45.300 [2024-11-20 15:24:45.968542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:31:45.300 [2024-11-20 15:24:45.968553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.300 [2024-11-20 15:24:45.968590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.300 [2024-11-20 15:24:45.968602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:45.300 [2024-11-20 15:24:45.968614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:45.300 [2024-11-20 15:24:45.968625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.300 [2024-11-20 15:24:45.968658] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:45.300 [2024-11-20 15:24:45.973245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.300 [2024-11-20 15:24:45.973279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:45.300 [2024-11-20 15:24:45.973292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.603 ms 00:31:45.300 [2024-11-20 15:24:45.973303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.300 [2024-11-20 15:24:45.973343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.300 [2024-11-20 15:24:45.973353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:45.300 [2024-11-20 15:24:45.973365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:45.300 [2024-11-20 15:24:45.973375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.300 [2024-11-20 15:24:45.973420] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:45.300 [2024-11-20 15:24:45.973448] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:45.300 [2024-11-20 15:24:45.973488] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:45.300 [2024-11-20 15:24:45.973510] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:45.300 [2024-11-20 
15:24:45.973615] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:45.300 [2024-11-20 15:24:45.973630] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:45.300 [2024-11-20 15:24:45.973645] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:45.300 [2024-11-20 15:24:45.973660] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:45.300 [2024-11-20 15:24:45.973673] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:45.300 [2024-11-20 15:24:45.973685] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:45.300 [2024-11-20 15:24:45.973696] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:45.300 [2024-11-20 15:24:45.973707] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:45.300 [2024-11-20 15:24:45.973732] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:45.300 [2024-11-20 15:24:45.973744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.300 [2024-11-20 15:24:45.973759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:45.300 [2024-11-20 15:24:45.973769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.327 ms 00:31:45.300 [2024-11-20 15:24:45.973780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.300 [2024-11-20 15:24:45.973856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.300 [2024-11-20 15:24:45.973867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:45.300 [2024-11-20 15:24:45.973879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:31:45.300 [2024-11-20 15:24:45.973889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.300 [2024-11-20 15:24:45.973988] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:45.300 [2024-11-20 15:24:45.974000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:45.300 [2024-11-20 15:24:45.974016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:45.300 [2024-11-20 15:24:45.974028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.300 [2024-11-20 15:24:45.974038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:45.300 [2024-11-20 15:24:45.974048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:45.300 [2024-11-20 15:24:45.974058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:45.300 [2024-11-20 15:24:45.974067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:45.300 [2024-11-20 15:24:45.974078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:45.300 [2024-11-20 15:24:45.974090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.300 [2024-11-20 15:24:45.974100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:45.300 [2024-11-20 15:24:45.974110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:45.300 [2024-11-20 15:24:45.974120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.300 [2024-11-20 
15:24:45.974130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:45.300 [2024-11-20 15:24:45.974139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:45.300 [2024-11-20 15:24:45.974149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.300 [2024-11-20 15:24:45.974158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:45.300 [2024-11-20 15:24:45.974168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:45.300 [2024-11-20 15:24:45.974177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.300 [2024-11-20 15:24:45.974187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:45.301 [2024-11-20 15:24:45.974197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:45.301 [2024-11-20 15:24:45.974206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:45.301 [2024-11-20 15:24:45.974215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:45.301 [2024-11-20 15:24:45.974237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:45.301 [2024-11-20 15:24:45.974247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:45.301 [2024-11-20 15:24:45.974256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:45.301 [2024-11-20 15:24:45.974265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:45.301 [2024-11-20 15:24:45.974274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:45.301 [2024-11-20 15:24:45.974284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:45.301 [2024-11-20 15:24:45.974294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:45.301 [2024-11-20 15:24:45.974303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:45.301 [2024-11-20 15:24:45.974313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:45.301 [2024-11-20 15:24:45.974323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:45.301 [2024-11-20 15:24:45.974333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.301 [2024-11-20 15:24:45.974342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:45.301 [2024-11-20 15:24:45.974351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:45.301 [2024-11-20 15:24:45.974361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.301 [2024-11-20 15:24:45.974370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:45.301 [2024-11-20 15:24:45.974379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:45.301 [2024-11-20 15:24:45.974388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.301 [2024-11-20 15:24:45.974398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:45.301 [2024-11-20 15:24:45.974408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:45.301 [2024-11-20 15:24:45.974417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.301 [2024-11-20 15:24:45.974426] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:45.301 [2024-11-20 15:24:45.974438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:45.301 
[2024-11-20 15:24:45.974448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:45.301 [2024-11-20 15:24:45.974458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.301 [2024-11-20 15:24:45.974469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:45.301 [2024-11-20 15:24:45.974479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:45.301 [2024-11-20 15:24:45.974489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:45.301 [2024-11-20 15:24:45.974498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:45.301 [2024-11-20 15:24:45.974508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:45.301 [2024-11-20 15:24:45.974518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:45.301 [2024-11-20 15:24:45.974529] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:45.301 [2024-11-20 15:24:45.974542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:45.301 [2024-11-20 15:24:45.974554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:45.301 [2024-11-20 15:24:45.974564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:45.301 [2024-11-20 15:24:45.974575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:45.301 [2024-11-20 15:24:45.974585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:45.301 [2024-11-20 15:24:45.974596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:45.301 [2024-11-20 15:24:45.974607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:45.301 [2024-11-20 15:24:45.974618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:45.301 [2024-11-20 15:24:45.974629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:45.301 [2024-11-20 15:24:45.974640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:45.301 [2024-11-20 15:24:45.974651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:45.301 [2024-11-20 15:24:45.974662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:45.301 [2024-11-20 15:24:45.974673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:45.301 [2024-11-20 15:24:45.974683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:45.301 [2024-11-20 15:24:45.974694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:45.301 [2024-11-20 15:24:45.974704] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:45.301 [2024-11-20 15:24:45.974726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:45.301 [2024-11-20 15:24:45.974744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:45.301 [2024-11-20 15:24:45.974756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:45.301 [2024-11-20 15:24:45.974773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:45.301 [2024-11-20 15:24:45.974785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:45.301 [2024-11-20 15:24:45.974796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.301 [2024-11-20 15:24:45.974807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:45.301 [2024-11-20 15:24:45.974818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.865 ms 00:31:45.301 [2024-11-20 15:24:45.974829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.301 [2024-11-20 15:24:46.019911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.301 [2024-11-20 15:24:46.020109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:45.301 [2024-11-20 15:24:46.020253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.090 ms 00:31:45.301 [2024-11-20 15:24:46.020293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.301 [2024-11-20 15:24:46.020395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.301 [2024-11-20 15:24:46.020428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:45.301 [2024-11-20 15:24:46.020514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:31:45.301 [2024-11-20 15:24:46.020551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.301 [2024-11-20 15:24:46.074635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.301 [2024-11-20 15:24:46.074891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:45.301 [2024-11-20 15:24:46.075029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.012 ms 00:31:45.301 [2024-11-20 15:24:46.075069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.301 [2024-11-20 15:24:46.075178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.301 [2024-11-20 15:24:46.075265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:45.301 [2024-11-20 15:24:46.075304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:45.301 [2024-11-20 15:24:46.075344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.301 [2024-11-20 15:24:46.075579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.301 [2024-11-20 15:24:46.075669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:31:45.301 [2024-11-20 15:24:46.075761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.080 ms 00:31:45.301 [2024-11-20 15:24:46.075799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.301 [2024-11-20 15:24:46.075929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.301 [2024-11-20 15:24:46.075967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:45.301 [2024-11-20 15:24:46.076037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:31:45.301 [2024-11-20 15:24:46.076071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.301 [2024-11-20 15:24:46.101677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.301 [2024-11-20 15:24:46.101948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:45.301 [2024-11-20 15:24:46.102069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.435 ms 00:31:45.301 [2024-11-20 15:24:46.102117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.301 [2024-11-20 15:24:46.102354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.301 [2024-11-20 15:24:46.102451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:45.301 [2024-11-20 15:24:46.102527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:31:45.301 [2024-11-20 15:24:46.102559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.561 [2024-11-20 15:24:46.139586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.561 [2024-11-20 15:24:46.139865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:45.561 [2024-11-20 15:24:46.140020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.036 ms 00:31:45.561 [2024-11-20 15:24:46.140061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.561 [2024-11-20 15:24:46.156482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.561 [2024-11-20 15:24:46.156685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:45.561 [2024-11-20 15:24:46.156804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.678 ms 00:31:45.561 [2024-11-20 15:24:46.156822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.561 [2024-11-20 15:24:46.256187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.561 [2024-11-20 15:24:46.256271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:45.561 [2024-11-20 15:24:46.256298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 99.399 ms 00:31:45.561 [2024-11-20 15:24:46.256310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.561 [2024-11-20 15:24:46.256610] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:45.561 [2024-11-20 15:24:46.256840] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:45.561 [2024-11-20 15:24:46.257015] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:45.561 [2024-11-20 15:24:46.257182] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:45.561 [2024-11-20 15:24:46.257196] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.561 [2024-11-20 15:24:46.257208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:45.561 [2024-11-20 15:24:46.257222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.786 ms 00:31:45.561 [2024-11-20 15:24:46.257232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.561 [2024-11-20 15:24:46.257359] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:45.561 [2024-11-20 15:24:46.257374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.561 [2024-11-20 15:24:46.257391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:45.561 [2024-11-20 15:24:46.257403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:31:45.561 [2024-11-20 15:24:46.257413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.561 [2024-11-20 15:24:46.281917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.561 [2024-11-20 15:24:46.281997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:45.561 [2024-11-20 15:24:46.282017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.489 ms 00:31:45.561 [2024-11-20 15:24:46.282029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.561 [2024-11-20 15:24:46.297408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.561 [2024-11-20 15:24:46.297473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:45.561 [2024-11-20 15:24:46.297490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:31:45.561 [2024-11-20 15:24:46.297502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.561 [2024-11-20 15:24:46.297653] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:45.561 [2024-11-20 15:24:46.297996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.561 [2024-11-20 15:24:46.298011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:45.561 [2024-11-20 15:24:46.298023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.344 ms 00:31:45.561 [2024-11-20 15:24:46.298034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.165 [2024-11-20 15:24:46.845389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.165 [2024-11-20 15:24:46.845480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:46.165 [2024-11-20 15:24:46.845501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 546.792 ms 00:31:46.165 [2024-11-20 15:24:46.845514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.165 [2024-11-20 15:24:46.851415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.165 [2024-11-20 15:24:46.851467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:46.165 [2024-11-20 15:24:46.851484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.153 ms 00:31:46.165 [2024-11-20 15:24:46.851496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.165 [2024-11-20 15:24:46.851931] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:31:46.165 [2024-11-20 15:24:46.851971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.165 [2024-11-20 15:24:46.851983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:46.165 [2024-11-20 15:24:46.851996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.445 ms 00:31:46.165 [2024-11-20 15:24:46.852007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.165 [2024-11-20 15:24:46.852040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.165 [2024-11-20 15:24:46.852054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:46.165 [2024-11-20 15:24:46.852065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:46.165 [2024-11-20 15:24:46.852076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.165 [2024-11-20 15:24:46.852120] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 555.370 ms, result 0 00:31:46.165 [2024-11-20 15:24:46.852169] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:46.165 [2024-11-20 15:24:46.852260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.165 [2024-11-20 15:24:46.852270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:46.165 [2024-11-20 15:24:46.852281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.093 ms 00:31:46.165 [2024-11-20 15:24:46.852290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.732 [2024-11-20 15:24:47.394013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.732 [2024-11-20 15:24:47.394104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:46.732 [2024-11-20 15:24:47.394124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 541.147 ms 00:31:46.732 [2024-11-20 15:24:47.394135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.732 [2024-11-20 15:24:47.400300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.732 [2024-11-20 15:24:47.400348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:46.732 [2024-11-20 15:24:47.400363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.394 ms 00:31:46.732 [2024-11-20 15:24:47.400374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.732 [2024-11-20 15:24:47.400839] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:46.732 [2024-11-20 15:24:47.400863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.732 [2024-11-20 15:24:47.400874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:46.732 [2024-11-20 15:24:47.400887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.458 ms 00:31:46.732 [2024-11-20 15:24:47.400897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.732 [2024-11-20 15:24:47.400930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.732 [2024-11-20 15:24:47.400943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:46.732 [2024-11-20 15:24:47.400955] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:46.732 [2024-11-20 15:24:47.400965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.732 [2024-11-20 15:24:47.401007] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 549.727 ms, result 0 00:31:46.732 [2024-11-20 15:24:47.401055] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:46.732 [2024-11-20 15:24:47.401069] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:46.732 [2024-11-20 15:24:47.401083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.732 [2024-11-20 15:24:47.401095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:46.732 [2024-11-20 15:24:47.401106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1105.246 ms 00:31:46.732 [2024-11-20 15:24:47.401117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.732 [2024-11-20 15:24:47.401152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.732 [2024-11-20 15:24:47.401164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:46.732 [2024-11-20 15:24:47.401181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:46.732 [2024-11-20 15:24:47.401191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.733 [2024-11-20 15:24:47.415472] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:46.733 [2024-11-20 15:24:47.415823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.733 [2024-11-20 15:24:47.415875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:46.733 [2024-11-20 15:24:47.415963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.637 ms 00:31:46.733 [2024-11-20 15:24:47.416001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.733 [2024-11-20 15:24:47.416665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.733 [2024-11-20 15:24:47.416803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:46.733 [2024-11-20 15:24:47.416900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.526 ms 00:31:46.733 [2024-11-20 15:24:47.416937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.733 [2024-11-20 15:24:47.419009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.733 [2024-11-20 15:24:47.419127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:46.733 [2024-11-20 15:24:47.419270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.026 ms 00:31:46.733 [2024-11-20 15:24:47.419307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.733 [2024-11-20 15:24:47.419400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.733 [2024-11-20 15:24:47.419442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:46.733 [2024-11-20 15:24:47.419531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:31:46.733 [2024-11-20 15:24:47.419573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.733 [2024-11-20 15:24:47.419736] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.733 [2024-11-20 15:24:47.419899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:46.733 [2024-11-20 15:24:47.419979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:31:46.733 [2024-11-20 15:24:47.420015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.733 [2024-11-20 15:24:47.420066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.733 [2024-11-20 15:24:47.420099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:46.733 [2024-11-20 15:24:47.420131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:46.733 [2024-11-20 15:24:47.420161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.733 [2024-11-20 15:24:47.420306] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:46.733 [2024-11-20 15:24:47.420349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.733 [2024-11-20 15:24:47.420380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:46.733 [2024-11-20 15:24:47.420412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:31:46.733 [2024-11-20 15:24:47.420443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.733 [2024-11-20 15:24:47.420528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.733 [2024-11-20 15:24:47.420543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:46.733 [2024-11-20 15:24:47.420554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:31:46.733 [2024-11-20 15:24:47.420565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.733 [2024-11-20 15:24:47.421830] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1499.346 ms, result 0 00:31:46.733 [2024-11-20 15:24:47.436320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:46.733 [2024-11-20 15:24:47.452277] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:46.733 [2024-11-20 15:24:47.463214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:46.733 Validate MD5 checksum, iteration 1 00:31:46.733 15:24:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:46.733 15:24:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:46.733 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:46.733 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:46.733 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:46.733 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:46.733 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:46.733 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:46.733 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:46.733 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:46.733 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:46.733 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:46.733 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:46.733 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:46.733 15:24:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:46.992 [2024-11-20 15:24:47.604869] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:31:46.992 [2024-11-20 15:24:47.605194] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84409 ] 00:31:46.992 [2024-11-20 15:24:47.793354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.250 [2024-11-20 15:24:47.944979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:49.152  [2024-11-20T15:24:50.564Z] Copying: 678/1024 [MB] (678 MBps) [2024-11-20T15:24:53.103Z] Copying: 1024/1024 [MB] (average 659 MBps) 00:31:52.267 00:31:52.267 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:52.267 15:24:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:54.171 15:24:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:54.171 Validate MD5 checksum, iteration 2 00:31:54.171 15:24:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f9cadea94c78487220245e1a0e2d9df0 00:31:54.171 15:24:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f9cadea94c78487220245e1a0e2d9df0 != \f\9\c\a\d\e\a\9\4\c\7\8\4\8\7\2\2\0\2\4\5\e\1\a\0\e\2\d\9\d\f\0 ]] 00:31:54.171 15:24:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:54.171 15:24:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:54.171 15:24:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:54.171 15:24:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:54.171 15:24:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:54.171 15:24:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:54.171 15:24:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:54.171 15:24:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:54.172 15:24:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:54.172 [2024-11-20 15:24:54.904890] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 00:31:54.172 [2024-11-20 15:24:54.905378] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84489 ] 00:31:54.431 [2024-11-20 15:24:55.093864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.431 [2024-11-20 15:24:55.243706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.340  [2024-11-20T15:24:57.744Z] Copying: 663/1024 [MB] (663 MBps) [2024-11-20T15:24:59.126Z] Copying: 1024/1024 [MB] (average 644 MBps) 00:31:58.290 00:31:58.290 15:24:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:58.290 15:24:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:00.202 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:00.202 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f9025fedb70a4ae76087bcbe1093300a 00:32:00.202 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f9025fedb70a4ae76087bcbe1093300a != \f\9\0\2\5\f\e\d\b\7\0\a\4\a\e\7\6\0\8\7\b\c\b\e\1\0\9\3\3\0\0\a ]] 00:32:00.202 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:00.203 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:00.203 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:32:00.203 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:32:00.203 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:32:00.203 15:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:00.462 15:25:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:32:00.462 15:25:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:32:00.462 15:25:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:32:00.462 15:25:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:32:00.462 15:25:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84369 ]] 00:32:00.462 15:25:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84369 00:32:00.462 15:25:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84369 ']' 00:32:00.462 15:25:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84369 00:32:00.462 15:25:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:00.462 15:25:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:00.462 15:25:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84369 00:32:00.462 killing process with pid 84369 00:32:00.462 15:25:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:00.462 15:25:01 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:00.462 15:25:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84369' 00:32:00.462 15:25:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84369 00:32:00.462 15:25:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84369 00:32:01.843 [2024-11-20 15:25:02.333116] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:01.843 [2024-11-20 15:25:02.354252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.843 [2024-11-20 15:25:02.354306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:01.843 [2024-11-20 15:25:02.354326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:01.843 [2024-11-20 15:25:02.354337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.843 [2024-11-20 15:25:02.354365] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:01.843 [2024-11-20 15:25:02.359072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.843 [2024-11-20 15:25:02.359108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:01.843 [2024-11-20 15:25:02.359128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.696 ms 00:32:01.843 [2024-11-20 15:25:02.359139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.843 [2024-11-20 15:25:02.359368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.843 [2024-11-20 15:25:02.359382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:01.843 [2024-11-20 15:25:02.359394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms 00:32:01.843 [2024-11-20 15:25:02.359405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.843 [2024-11-20 15:25:02.360640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.843 [2024-11-20 15:25:02.360676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:01.843 [2024-11-20 15:25:02.360689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.212 ms 00:32:01.843 [2024-11-20 15:25:02.360700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.843 [2024-11-20 15:25:02.361735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.843 [2024-11-20 15:25:02.361770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:01.843 [2024-11-20 15:25:02.361783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.976 ms 00:32:01.843 [2024-11-20 15:25:02.361795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.843 [2024-11-20 15:25:02.377030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.843 [2024-11-20 15:25:02.377081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:01.843 [2024-11-20 15:25:02.377097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.188 ms 00:32:01.843 [2024-11-20 15:25:02.377114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.843 [2024-11-20 15:25:02.385144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.843 [2024-11-20 15:25:02.385180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:32:01.843 [2024-11-20 15:25:02.385195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.002 ms 00:32:01.843 [2024-11-20 15:25:02.385206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.843 [2024-11-20 15:25:02.385316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.843 [2024-11-20 15:25:02.385331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:01.843 [2024-11-20 15:25:02.385344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:32:01.843 [2024-11-20 15:25:02.385355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.843 [2024-11-20 15:25:02.399907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.843 [2024-11-20 15:25:02.400056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:01.843 [2024-11-20 15:25:02.400079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.549 ms 00:32:01.843 [2024-11-20 15:25:02.400089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.843 [2024-11-20 15:25:02.415066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.843 [2024-11-20 15:25:02.415207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:01.843 [2024-11-20 15:25:02.415228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.961 ms 00:32:01.843 [2024-11-20 15:25:02.415239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.843 [2024-11-20 15:25:02.429449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.843 [2024-11-20 15:25:02.429484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:01.843 [2024-11-20 15:25:02.429498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.195 ms 00:32:01.843 [2024-11-20 15:25:02.429508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.843 [2024-11-20 15:25:02.443741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.843 [2024-11-20 15:25:02.443775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:01.843 [2024-11-20 15:25:02.443788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.156 ms 00:32:01.843 [2024-11-20 15:25:02.443798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.843 [2024-11-20 15:25:02.443836] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:01.843 [2024-11-20 15:25:02.443855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:01.843 [2024-11-20 15:25:02.443869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:01.843 [2024-11-20 15:25:02.443880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:01.843 [2024-11-20 15:25:02.443892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:01.843 [2024-11-20 15:25:02.443903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:01.843 [2024-11-20 15:25:02.443914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:01.843 [2024-11-20 15:25:02.443925] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:01.843 [2024-11-20 15:25:02.443937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:01.843 [2024-11-20 15:25:02.443948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:01.843 [2024-11-20 15:25:02.443959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:01.843 [2024-11-20 15:25:02.443970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:01.843 [2024-11-20 15:25:02.443981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:01.843 [2024-11-20 15:25:02.443992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:01.843 [2024-11-20 15:25:02.444002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:01.843 [2024-11-20 15:25:02.444012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:01.844 [2024-11-20 15:25:02.444024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:01.844 [2024-11-20 15:25:02.444035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:01.844 [2024-11-20 15:25:02.444046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:01.844 [2024-11-20 15:25:02.444059] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:01.844 [2024-11-20 15:25:02.444070] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b41d7cb8-ec9f-4951-83df-c2fa2ffb9b57 00:32:01.844 [2024-11-20 15:25:02.444080] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:01.844 [2024-11-20 15:25:02.444097] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:32:01.844 [2024-11-20 15:25:02.444108] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:32:01.844 [2024-11-20 15:25:02.444119] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:32:01.844 [2024-11-20 15:25:02.444129] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:01.844 [2024-11-20 15:25:02.444141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:01.844 [2024-11-20 15:25:02.444152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:01.844 [2024-11-20 15:25:02.444162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:01.844 [2024-11-20 15:25:02.444181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:01.844 [2024-11-20 15:25:02.444193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.844 [2024-11-20 15:25:02.444210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:01.844 [2024-11-20 15:25:02.444222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.359 ms 00:32:01.844 [2024-11-20 15:25:02.444233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.844 [2024-11-20 15:25:02.466446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.844 [2024-11-20 15:25:02.466584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:32:01.844 [2024-11-20 15:25:02.466659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.227 ms 00:32:01.844 [2024-11-20 15:25:02.466694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.844 [2024-11-20 15:25:02.467377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.844 [2024-11-20 15:25:02.467480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:01.844 [2024-11-20 15:25:02.467559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.598 ms 00:32:01.844 [2024-11-20 15:25:02.467596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.844 [2024-11-20 15:25:02.538175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:01.844 [2024-11-20 15:25:02.538388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:01.844 [2024-11-20 15:25:02.538518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:01.844 [2024-11-20 15:25:02.538569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.844 [2024-11-20 15:25:02.538655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:01.844 [2024-11-20 15:25:02.538687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:01.844 [2024-11-20 15:25:02.538735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:01.844 [2024-11-20 15:25:02.538769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.844 [2024-11-20 15:25:02.538993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:01.844 [2024-11-20 15:25:02.539040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:01.844 [2024-11-20 15:25:02.539128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:01.844 [2024-11-20 15:25:02.539163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.844 [2024-11-20 15:25:02.539212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:01.844 [2024-11-20 15:25:02.539290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:01.844 [2024-11-20 15:25:02.539326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:01.844 [2024-11-20 15:25:02.539463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.844 [2024-11-20 15:25:02.674776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:01.844 [2024-11-20 15:25:02.674991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:01.844 [2024-11-20 15:25:02.675077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:01.844 [2024-11-20 15:25:02.675115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.103 [2024-11-20 15:25:02.782478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.103 [2024-11-20 15:25:02.782678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:02.103 [2024-11-20 15:25:02.782790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.103 [2024-11-20 15:25:02.782829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.103 [2024-11-20 15:25:02.782995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.104 [2024-11-20 15:25:02.783144] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:02.104 [2024-11-20 15:25:02.783229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.104 [2024-11-20 15:25:02.783242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.104 [2024-11-20 15:25:02.783319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.104 [2024-11-20 15:25:02.783334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:02.104 [2024-11-20 15:25:02.783354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.104 [2024-11-20 15:25:02.783377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.104 [2024-11-20 15:25:02.783511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.104 [2024-11-20 15:25:02.783526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:02.104 [2024-11-20 15:25:02.783539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.104 [2024-11-20 15:25:02.783550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.104 [2024-11-20 15:25:02.783598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.104 [2024-11-20 15:25:02.783611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:02.104 [2024-11-20 15:25:02.783623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.104 [2024-11-20 15:25:02.783639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.104 [2024-11-20 15:25:02.783684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.104 [2024-11-20 15:25:02.783697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:02.104 [2024-11-20 15:25:02.783707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.104 [2024-11-20 15:25:02.783730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.104 [2024-11-20 15:25:02.783783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.104 [2024-11-20 15:25:02.783795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:02.104 [2024-11-20 15:25:02.783811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.104 [2024-11-20 15:25:02.783821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.104 [2024-11-20 15:25:02.783971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 430.370 ms, result 0 00:32:03.481 15:25:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:03.481 15:25:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:03.481 15:25:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:32:03.481 15:25:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:32:03.481 15:25:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:32:03.481 15:25:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:03.481 15:25:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:32:03.481 15:25:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:03.481 Remove 
shared memory files 00:32:03.481 15:25:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:03.481 15:25:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:03.481 15:25:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84150 00:32:03.481 15:25:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:03.481 15:25:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:03.481 ************************************ 00:32:03.481 END TEST ftl_upgrade_shutdown 00:32:03.481 ************************************ 00:32:03.481 00:32:03.481 real 1m30.960s 00:32:03.482 user 2m5.866s 00:32:03.482 sys 0m25.193s 00:32:03.482 15:25:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:03.482 15:25:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:03.482 Process with pid 77004 is not found 00:32:03.482 15:25:04 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:32:03.482 15:25:04 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:03.482 15:25:04 ftl -- ftl/ftl.sh@14 -- # killprocess 77004 00:32:03.482 15:25:04 ftl -- common/autotest_common.sh@954 -- # '[' -z 77004 ']' 00:32:03.482 15:25:04 ftl -- common/autotest_common.sh@958 -- # kill -0 77004 00:32:03.482 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77004) - No such process 00:32:03.482 15:25:04 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 77004 is not found' 00:32:03.482 15:25:04 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:03.482 15:25:04 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84618 00:32:03.482 15:25:04 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:03.482 15:25:04 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84618 00:32:03.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:03.482 15:25:04 ftl -- common/autotest_common.sh@835 -- # '[' -z 84618 ']' 00:32:03.482 15:25:04 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.482 15:25:04 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:03.482 15:25:04 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:03.482 15:25:04 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:03.482 15:25:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:03.740 [2024-11-20 15:25:04.419064] Starting SPDK v25.01-pre git sha1 c1691a126 / DPDK 24.03.0 initialization... 
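For reference, the two "Validate MD5 checksum, iteration N" passes that just finished correspond to the test_validate_checksum loop in upgrade_shutdown.sh, whose shape can be reconstructed from the xtrace markers (@96-@105) above. The loop structure, the ftln1/file paths, and the tcp_dd flags are taken directly from the trace; the md5 reference array (captured before the shutdown/upgrade cycle), the iterations variable's origin, and the mismatch handling are assumptions — a sketch, not the verbatim upstream script:

    # Reconstruction of test_validate_checksum from the xtrace above.
    # Assumes: iterations is set by the caller (2 in this run), testfile is
    # /home/vagrant/spdk_repo/spdk/test/ftl/file, and md5[] holds the
    # reference sums recorded before the FTL shutdown.
    test_validate_checksum() {
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do                     # @97
            echo "Validate MD5 checksum, iteration $((i + 1))"     # @98
            # Read 1024 x 1 MiB blocks (queue depth 2) from the FTL bdev
            # over NVMe/TCP into the scratch file                  # @99
            tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 \
                --count=1024 --qd=2 --skip=$skip
            skip=$((skip + 1024))                                  # @100
            sum=$(md5sum "$testfile" | cut -f1 -d' ')              # @102-@103
            # @105: the data must be unchanged across the upgrade/shutdown
            # cycle; failure handling on mismatch is assumed
            [[ $sum != "${md5[i]}" ]] && return 1
        done
    }

Per the trace, tcp_dd itself (common.sh @198-@199) wraps spdk_dd with --cpumask=[1], the target's RPC socket, and the initiator's ini.json before forwarding these arguments.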
00:32:03.740 [2024-11-20 15:25:04.419439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84618 ] 00:32:03.999 [2024-11-20 15:25:04.605412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.999 [2024-11-20 15:25:04.753185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.378 15:25:05 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:05.378 15:25:05 ftl -- common/autotest_common.sh@868 -- # return 0 00:32:05.378 15:25:05 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:05.378 nvme0n1 00:32:05.378 15:25:06 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:05.378 15:25:06 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:05.378 15:25:06 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:05.638 15:25:06 ftl -- ftl/common.sh@28 -- # stores=f011d099-4fc5-468c-a01e-93b727a94600 00:32:05.638 15:25:06 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:05.638 15:25:06 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f011d099-4fc5-468c-a01e-93b727a94600 00:32:05.897 15:25:06 ftl -- ftl/ftl.sh@23 -- # killprocess 84618 00:32:05.897 15:25:06 ftl -- common/autotest_common.sh@954 -- # '[' -z 84618 ']' 00:32:05.897 15:25:06 ftl -- common/autotest_common.sh@958 -- # kill -0 84618 00:32:05.897 15:25:06 ftl -- common/autotest_common.sh@959 -- # uname 00:32:05.897 15:25:06 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:05.897 15:25:06 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84618 00:32:05.897 killing process with pid 84618 00:32:05.897 15:25:06 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:05.897 15:25:06 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:05.897 15:25:06 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84618' 00:32:05.897 15:25:06 ftl -- common/autotest_common.sh@973 -- # kill 84618 00:32:05.897 15:25:06 ftl -- common/autotest_common.sh@978 -- # wait 84618 00:32:08.431 15:25:09 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:08.689 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:08.948 Waiting for block devices as requested 00:32:08.948 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:09.206 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:09.206 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:09.206 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:14.536 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:14.536 Remove shared memory files 00:32:14.536 15:25:15 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:14.536 15:25:15 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:14.536 15:25:15 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:14.536 15:25:15 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:14.536 15:25:15 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:14.536 15:25:15 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:14.536 15:25:15 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:14.536 
************************************ 00:32:14.536 END TEST ftl 00:32:14.536 ************************************ 00:32:14.536 00:32:14.536 real 11m27.016s 00:32:14.536 user 14m3.997s 00:32:14.536 sys 1m40.064s 00:32:14.536 15:25:15 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:14.536 15:25:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:14.536 15:25:15 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:14.536 15:25:15 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:14.536 15:25:15 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:14.536 15:25:15 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:14.536 15:25:15 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:14.536 15:25:15 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:14.536 15:25:15 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:14.536 15:25:15 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:32:14.536 15:25:15 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:32:14.536 15:25:15 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:32:14.536 15:25:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:14.536 15:25:15 -- common/autotest_common.sh@10 -- # set +x 00:32:14.536 15:25:15 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:32:14.536 15:25:15 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:32:14.536 15:25:15 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:32:14.536 15:25:15 -- common/autotest_common.sh@10 -- # set +x 00:32:16.439 INFO: APP EXITING 00:32:16.439 INFO: killing all VMs 00:32:16.439 INFO: killing vhost app 00:32:16.439 INFO: EXIT DONE 00:32:17.008 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:17.578 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:17.578 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:17.578 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:17.578 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:18.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:18.407 Cleaning 00:32:18.407 Removing: /var/run/dpdk/spdk0/config 00:32:18.407 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:18.407 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:18.407 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:18.407 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:18.407 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:18.407 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:18.407 Removing: /var/run/dpdk/spdk0 00:32:18.407 Removing: /var/run/dpdk/spdk_pid57501 00:32:18.407 Removing: /var/run/dpdk/spdk_pid57747 00:32:18.407 Removing: /var/run/dpdk/spdk_pid57982 00:32:18.407 Removing: /var/run/dpdk/spdk_pid58097 00:32:18.407 Removing: /var/run/dpdk/spdk_pid58153 00:32:18.407 Removing: /var/run/dpdk/spdk_pid58292 00:32:18.407 Removing: /var/run/dpdk/spdk_pid58316 00:32:18.668 Removing: /var/run/dpdk/spdk_pid58531 00:32:18.668 Removing: /var/run/dpdk/spdk_pid58649 00:32:18.668 Removing: /var/run/dpdk/spdk_pid58767 00:32:18.668 Removing: /var/run/dpdk/spdk_pid58895 00:32:18.668 Removing: /var/run/dpdk/spdk_pid59014 00:32:18.668 Removing: /var/run/dpdk/spdk_pid59059 00:32:18.668 Removing: /var/run/dpdk/spdk_pid59090 00:32:18.668 Removing: /var/run/dpdk/spdk_pid59166 00:32:18.668 Removing: /var/run/dpdk/spdk_pid59294 00:32:18.668 Removing: /var/run/dpdk/spdk_pid59754 00:32:18.668 Removing: /var/run/dpdk/spdk_pid59836 
00:32:18.668 Removing: /var/run/dpdk/spdk_pid59923 00:32:18.668 Removing: /var/run/dpdk/spdk_pid59939 00:32:18.668 Removing: /var/run/dpdk/spdk_pid60099 00:32:18.668 Removing: /var/run/dpdk/spdk_pid60121 00:32:18.668 Removing: /var/run/dpdk/spdk_pid60280 00:32:18.668 Removing: /var/run/dpdk/spdk_pid60307 00:32:18.668 Removing: /var/run/dpdk/spdk_pid60376 00:32:18.668 Removing: /var/run/dpdk/spdk_pid60400 00:32:18.668 Removing: /var/run/dpdk/spdk_pid60467 00:32:18.668 Removing: /var/run/dpdk/spdk_pid60496 00:32:18.668 Removing: /var/run/dpdk/spdk_pid60691 00:32:18.668 Removing: /var/run/dpdk/spdk_pid60733 00:32:18.668 Removing: /var/run/dpdk/spdk_pid60822 00:32:18.668 Removing: /var/run/dpdk/spdk_pid61016 00:32:18.668 Removing: /var/run/dpdk/spdk_pid61117 00:32:18.668 Removing: /var/run/dpdk/spdk_pid61159 00:32:18.668 Removing: /var/run/dpdk/spdk_pid61623 00:32:18.668 Removing: /var/run/dpdk/spdk_pid61727 00:32:18.668 Removing: /var/run/dpdk/spdk_pid61847 00:32:18.668 Removing: /var/run/dpdk/spdk_pid61900 00:32:18.668 Removing: /var/run/dpdk/spdk_pid61931 00:32:18.668 Removing: /var/run/dpdk/spdk_pid62015 00:32:18.668 Removing: /var/run/dpdk/spdk_pid62667 00:32:18.668 Removing: /var/run/dpdk/spdk_pid62709 00:32:18.668 Removing: /var/run/dpdk/spdk_pid63223 00:32:18.668 Removing: /var/run/dpdk/spdk_pid63329 00:32:18.668 Removing: /var/run/dpdk/spdk_pid63449 00:32:18.668 Removing: /var/run/dpdk/spdk_pid63509 00:32:18.668 Removing: /var/run/dpdk/spdk_pid63540 00:32:18.668 Removing: /var/run/dpdk/spdk_pid63567 00:32:18.668 Removing: /var/run/dpdk/spdk_pid65469 00:32:18.668 Removing: /var/run/dpdk/spdk_pid65623 00:32:18.668 Removing: /var/run/dpdk/spdk_pid65627 00:32:18.668 Removing: /var/run/dpdk/spdk_pid65644 00:32:18.668 Removing: /var/run/dpdk/spdk_pid65696 00:32:18.668 Removing: /var/run/dpdk/spdk_pid65700 00:32:18.668 Removing: /var/run/dpdk/spdk_pid65712 00:32:18.668 Removing: /var/run/dpdk/spdk_pid65762 00:32:18.668 Removing: /var/run/dpdk/spdk_pid65766 00:32:18.668 Removing: /var/run/dpdk/spdk_pid65778 00:32:18.668 Removing: /var/run/dpdk/spdk_pid65823 00:32:18.668 Removing: /var/run/dpdk/spdk_pid65827 00:32:18.668 Removing: /var/run/dpdk/spdk_pid65839 00:32:18.668 Removing: /var/run/dpdk/spdk_pid67265 00:32:18.668 Removing: /var/run/dpdk/spdk_pid67384 00:32:18.668 Removing: /var/run/dpdk/spdk_pid68823 00:32:18.668 Removing: /var/run/dpdk/spdk_pid70562 00:32:18.668 Removing: /var/run/dpdk/spdk_pid70647 00:32:18.668 Removing: /var/run/dpdk/spdk_pid70728 00:32:18.928 Removing: /var/run/dpdk/spdk_pid70843 00:32:18.928 Removing: /var/run/dpdk/spdk_pid70940 00:32:18.928 Removing: /var/run/dpdk/spdk_pid71042 00:32:18.928 Removing: /var/run/dpdk/spdk_pid71126 00:32:18.928 Removing: /var/run/dpdk/spdk_pid71203 00:32:18.928 Removing: /var/run/dpdk/spdk_pid71315 00:32:18.928 Removing: /var/run/dpdk/spdk_pid71416 00:32:18.928 Removing: /var/run/dpdk/spdk_pid71517 00:32:18.928 Removing: /var/run/dpdk/spdk_pid71602 00:32:18.928 Removing: /var/run/dpdk/spdk_pid71683 00:32:18.928 Removing: /var/run/dpdk/spdk_pid71803 00:32:18.928 Removing: /var/run/dpdk/spdk_pid71906 00:32:18.928 Removing: /var/run/dpdk/spdk_pid72023 00:32:18.928 Removing: /var/run/dpdk/spdk_pid72120 00:32:18.928 Removing: /var/run/dpdk/spdk_pid72201 00:32:18.928 Removing: /var/run/dpdk/spdk_pid72311 00:32:18.928 Removing: /var/run/dpdk/spdk_pid72417 00:32:18.928 Removing: /var/run/dpdk/spdk_pid72515 00:32:18.928 Removing: /var/run/dpdk/spdk_pid72600 00:32:18.928 Removing: /var/run/dpdk/spdk_pid72686 00:32:18.928 Removing: 
/var/run/dpdk/spdk_pid72767 00:32:18.928 Removing: /var/run/dpdk/spdk_pid72847 00:32:18.928 Removing: /var/run/dpdk/spdk_pid72956 00:32:18.928 Removing: /var/run/dpdk/spdk_pid73052 00:32:18.928 Removing: /var/run/dpdk/spdk_pid73153 00:32:18.928 Removing: /var/run/dpdk/spdk_pid73238 00:32:18.928 Removing: /var/run/dpdk/spdk_pid73318 00:32:18.928 Removing: /var/run/dpdk/spdk_pid73403 00:32:18.928 Removing: /var/run/dpdk/spdk_pid73483 00:32:18.928 Removing: /var/run/dpdk/spdk_pid73592 00:32:18.928 Removing: /var/run/dpdk/spdk_pid73694 00:32:18.928 Removing: /var/run/dpdk/spdk_pid73842 00:32:18.928 Removing: /var/run/dpdk/spdk_pid74144 00:32:18.928 Removing: /var/run/dpdk/spdk_pid74186 00:32:18.928 Removing: /var/run/dpdk/spdk_pid74653 00:32:18.928 Removing: /var/run/dpdk/spdk_pid74839 00:32:18.928 Removing: /var/run/dpdk/spdk_pid74943 00:32:18.928 Removing: /var/run/dpdk/spdk_pid75053 00:32:18.928 Removing: /var/run/dpdk/spdk_pid75112 00:32:18.928 Removing: /var/run/dpdk/spdk_pid75143 00:32:18.928 Removing: /var/run/dpdk/spdk_pid75444 00:32:18.928 Removing: /var/run/dpdk/spdk_pid75515 00:32:18.928 Removing: /var/run/dpdk/spdk_pid75612 00:32:18.928 Removing: /var/run/dpdk/spdk_pid76044 00:32:18.928 Removing: /var/run/dpdk/spdk_pid76193 00:32:18.928 Removing: /var/run/dpdk/spdk_pid77004 00:32:18.928 Removing: /var/run/dpdk/spdk_pid77154 00:32:18.928 Removing: /var/run/dpdk/spdk_pid77380 00:32:18.928 Removing: /var/run/dpdk/spdk_pid77484 00:32:18.928 Removing: /var/run/dpdk/spdk_pid77829 00:32:18.928 Removing: /var/run/dpdk/spdk_pid78101 00:32:18.928 Removing: /var/run/dpdk/spdk_pid78461 00:32:18.928 Removing: /var/run/dpdk/spdk_pid78677 00:32:18.928 Removing: /var/run/dpdk/spdk_pid78811 00:32:18.928 Removing: /var/run/dpdk/spdk_pid78886 00:32:18.928 Removing: /var/run/dpdk/spdk_pid79018 00:32:18.928 Removing: /var/run/dpdk/spdk_pid79059 00:32:18.928 Removing: /var/run/dpdk/spdk_pid79124 00:32:18.928 Removing: /var/run/dpdk/spdk_pid79337 00:32:18.928 Removing: /var/run/dpdk/spdk_pid79579 00:32:18.928 Removing: /var/run/dpdk/spdk_pid79992 00:32:18.928 Removing: /var/run/dpdk/spdk_pid80413 00:32:19.187 Removing: /var/run/dpdk/spdk_pid80827 00:32:19.187 Removing: /var/run/dpdk/spdk_pid81303 00:32:19.187 Removing: /var/run/dpdk/spdk_pid81457 00:32:19.187 Removing: /var/run/dpdk/spdk_pid81551 00:32:19.187 Removing: /var/run/dpdk/spdk_pid82177 00:32:19.187 Removing: /var/run/dpdk/spdk_pid82253 00:32:19.187 Removing: /var/run/dpdk/spdk_pid82701 00:32:19.187 Removing: /var/run/dpdk/spdk_pid83066 00:32:19.187 Removing: /var/run/dpdk/spdk_pid83566 00:32:19.187 Removing: /var/run/dpdk/spdk_pid83694 00:32:19.187 Removing: /var/run/dpdk/spdk_pid83749 00:32:19.187 Removing: /var/run/dpdk/spdk_pid83822 00:32:19.187 Removing: /var/run/dpdk/spdk_pid83879 00:32:19.187 Removing: /var/run/dpdk/spdk_pid83949 00:32:19.187 Removing: /var/run/dpdk/spdk_pid84150 00:32:19.187 Removing: /var/run/dpdk/spdk_pid84236 00:32:19.187 Removing: /var/run/dpdk/spdk_pid84303 00:32:19.187 Removing: /var/run/dpdk/spdk_pid84369 00:32:19.187 Removing: /var/run/dpdk/spdk_pid84409 00:32:19.187 Removing: /var/run/dpdk/spdk_pid84489 00:32:19.187 Removing: /var/run/dpdk/spdk_pid84618 00:32:19.187 Clean 00:32:19.187 15:25:19 -- common/autotest_common.sh@1453 -- # return 0 00:32:19.187 15:25:19 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:32:19.187 15:25:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:19.187 15:25:19 -- common/autotest_common.sh@10 -- # set +x 00:32:19.187 15:25:19 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:32:19.187 15:25:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:19.187 15:25:19 -- common/autotest_common.sh@10 -- # set +x 00:32:19.447 15:25:20 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:19.447 15:25:20 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:19.447 15:25:20 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:19.447 15:25:20 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:32:19.447 15:25:20 -- spdk/autotest.sh@398 -- # hostname 00:32:19.447 15:25:20 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:19.447 geninfo: WARNING: invalid characters removed from testname! 00:32:45.995 15:25:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:50.180 15:25:50 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:52.096 15:25:52 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:54.004 15:25:54 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:56.539 15:25:57 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:59.075 15:25:59 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:00.981 15:26:01 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:00.981 15:26:01 -- spdk/autorun.sh@1 -- $ timing_finish 00:33:00.981 15:26:01 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:33:00.981 15:26:01 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:00.981 15:26:01 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:33:00.981 15:26:01 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:00.981 + [[ -n 5238 ]] 00:33:00.981 + sudo kill 5238 00:33:00.991 [Pipeline] } 00:33:01.006 [Pipeline] // timeout 00:33:01.012 [Pipeline] } 00:33:01.026 [Pipeline] // stage 00:33:01.032 [Pipeline] } 00:33:01.046 [Pipeline] // catchError 00:33:01.055 [Pipeline] stage 00:33:01.058 [Pipeline] { (Stop VM) 00:33:01.070 [Pipeline] sh 00:33:01.353 + vagrant halt 00:33:04.644 ==> default: Halting domain... 00:33:11.224 [Pipeline] sh 00:33:11.507 + vagrant destroy -f 00:33:14.803 ==> default: Removing domain... 00:33:15.147 [Pipeline] sh 00:33:15.431 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:33:15.440 [Pipeline] } 00:33:15.456 [Pipeline] // stage 00:33:15.461 [Pipeline] } 00:33:15.475 [Pipeline] // dir 00:33:15.480 [Pipeline] } 00:33:15.495 [Pipeline] // wrap 00:33:15.501 [Pipeline] } 00:33:15.514 [Pipeline] // catchError 00:33:15.524 [Pipeline] stage 00:33:15.526 [Pipeline] { (Epilogue) 00:33:15.539 [Pipeline] sh 00:33:15.821 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:22.405 [Pipeline] catchError 00:33:22.407 [Pipeline] { 00:33:22.421 [Pipeline] sh 00:33:22.705 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:22.705 Artifacts sizes are good 00:33:22.973 [Pipeline] } 00:33:22.987 [Pipeline] // catchError 00:33:22.999 [Pipeline] archiveArtifacts 00:33:23.006 Archiving artifacts 00:33:23.116 [Pipeline] cleanWs 00:33:23.128 [WS-CLEANUP] Deleting project workspace... 00:33:23.128 [WS-CLEANUP] Deferred wipeout is used... 00:33:23.135 [WS-CLEANUP] done 00:33:23.137 [Pipeline] } 00:33:23.153 [Pipeline] // stage 00:33:23.159 [Pipeline] } 00:33:23.173 [Pipeline] // node 00:33:23.178 [Pipeline] End of Pipeline 00:33:23.217 Finished: SUCCESS
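A closing note on the killprocess helper invoked repeatedly in this log (pids 84369, 77004, and 84618): the xtrace through autotest_common.sh @954-@981 implies roughly the shape below. Only the probes visible in the trace are certain; the sudo branch's action and the exact return handling are not shown and are assumptions:

    # Sketch of killprocess as implied by the xtrace (@954-@981); the real
    # autotest_common.sh implementation may differ in details.
    killprocess() {
        local pid=$1 process_name kill_cmd=kill
        [ -n "$pid" ] || return 1                       # @954: require a pid
        if ! kill -0 "$pid" 2>/dev/null; then           # @958: probe liveness
            echo "Process with pid $pid is not found"   # @981
            return 0
        fi
        if [ "$(uname)" = Linux ]; then                 # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960
            # @964: a sudo-wrapped target presumably needs elevated rights
            # to kill (assumed branch; the trace only shows the comparison)
            [ "$process_name" = sudo ] && kill_cmd='sudo kill'
        fi
        echo "killing process with pid $pid"            # @972
        $kill_cmd "$pid"                                # @973
        wait "$pid" 2>/dev/null || true                 # @978: reap, ignore status
    }

In this run the @958 probe is what distinguishes the two outcomes seen above: pid 84369 was alive and killed normally, while pid 77004 had already exited, producing the "No such process" xtrace line and the "Process with pid 77004 is not found" message.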